Commit graph

353 commits

Author SHA1 Message Date
Anas Nashif 70cf96b5e1 syscall: z_thread_perms_all_clear -> k_thread_perms_all_clear
Rename internal function z_thread_perms_all_clear.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif 7a18c2b150 syscall: rename z_object_uninit -> k_object_uninit
Rename internal function z_object_uninit.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif 684b8fcdd0 syscall: Z_SYSCALL_VERIFY_MSG -> K_SYSCALL_VERIFY_MSG
Rename macros and do not use Z_ for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif 4e396174ce kernel: move syscall_handler.h to internal include directory
Move the syscall_handler.h header, which is used internally only, to a
dedicated internal folder that should not be used outside of Zephyr.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif a6b490073e kernel: object: rename z_object -> k_object
Do not use z_ for internal structures and rename to k_object instead.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-11-03 11:46:52 +01:00
Anas Nashif f0c7fbf0f1 kernel: move sched_priq.h to internal/ folder
This header is internal to the kernel and shall not be included directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-30 18:43:28 +02:00
Peter Mitsis e6f1090553 kernel: Integrate object core statistics
Integrates object core statistics framework into the following
kernel objects:
  sys_mem_blocks, k_mem_slab
  threads, _cpu, z_kernel

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-30 08:04:14 +03:00
Peter Mitsis 6df8efe354 kernel: Integrate object cores into kernel
Integrates object cores into the following kernel structures
   sys_mem_blocks, k_mem_slab
   _cpu, z_kernel
   k_thread, k_timer
   k_condvar, k_event, k_mutex, k_sem
   k_mbox, k_msgq, k_pipe, k_fifo, k_lifo, k_stack

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-09-30 08:04:14 +03:00
Daniel Leung 0a50ff366e kernel: rename z_current_get() to k_sched_current_thread_query()
The original idea of z_current_get() was to be the counterpart
of k_current_get() for when the thread-local variable for the current
thread has not yet been initialized (with TLS enabled); otherwise they
are the same function. Now that z_current_get() is being used
outside of the core kernel, rename it into the kernel namespace so
other subsystems can conceptually use it too.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-09-28 16:15:46 +02:00
Benjamin Cabé a46f1b9c33 kernel: Fix unused-parameter warnings
Add missing ARG_UNUSED where needed.

Signed-off-by: Benjamin Cabé <benjamin@zephyrproject.org>
2023-09-28 16:14:39 +02:00
Evgeniy Paltsev 54e0731666 kernel: SMP: allow more than 5 CPU cores
Previously we limited the maximum number of CPU cores to 5; now we bump
this restriction so we can use up to 12 cores.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2023-09-25 09:49:50 +02:00
Anas Nashif 8634c3b444 kernel: move wait_q.h header to be internal
This header does not expose any public APIs, so move it under
kernel/include and change files including it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-09-12 12:55:36 -04:00
Florian Grandel cc4d1bd374 kernel: sched: optimize for Meta IRQs == coop prios
Combining Meta IRQs with cooperative threads requires extra care to
return to pre-empted cooperative threads when returning from a Meta IRQ.
This is only needed when there are cooperative threads that are not also
Meta IRQs. This PR saves some space & time when the number of Meta IRQs
is equal to the number of available cooperative threads.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-08-28 20:15:44 +02:00
Grant Ramsay 45701e696a kernel: sched: Disable FPU context when thread ends
When `CONFIG_FPU_SHARING` is enabled each `k_thread` struct has a saved
floating point context (`saved_fp_context`). During a context switch, the
current FPU owner's (`_current_cpu->arch.fpu_owner`) registers are saved
to its `saved_fp_context`, and the destination thread's FPU registers are
loaded from its `saved_fp_context`.

When a thread ends, it does not release ownership of the FPU
(`_current_cpu->arch.fpu_owner`). This is problematic if the `k_thread`
struct was allocated on the stack. The next context switch will save the
FPU registers into `k_thread -> saved_fp_context` which may now be out of
scope. This will likely (but not always) result in a crash.

Adding `arch_float_disable(thread);` when a thread ends disables
preservation of floating point context information, fixing this issue.

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2023-08-16 17:05:25 +02:00
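
For illustration, a simplified sketch of where the fix sits in the thread-exit path (function and field references are approximate, not quoted from the commit):

    static void end_thread(struct k_thread *thread)
    {
            /* ... mark the thread dead and remove it from the run queue ... */
    #ifdef CONFIG_FPU_SHARING
            /* Release FPU ownership so a later context switch does not save
             * FPU registers into this thread's saved_fp_context, which may
             * already be out of scope for stack-allocated k_thread structs.
             */
            (void)arch_float_disable(thread);
    #endif
    }
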
Daniel Leung 9c0ff33e04 kernel: rename shadow variables
Renames shadow variables found by -Wshadow.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-10 08:14:12 +00:00
Vadim Shakirov 73944c6157 kernel/sched: fix thread selection when ABORTING + PENDING
In commit d537267f, the check on thread abortion was moved from next_up
to z_get_next_switch_handle. However, next_up is also called from
z_swap_next_thread, so the check on thread abortion is now missing there.
This sometimes caused the thread to be stuck in the ABORTING + PENDING state
during test_smp_switch_torture in test/kernel/smp.

To avoid such cases in the future, it is worth leaving the check in next_up.

Signed-off-by: Vadim Shakirov <vadim.shakirov@syntacore.com>
2023-08-01 11:59:42 +02:00
Florian Grandel e256b7d244 kernel: spinlock: LOCKED -> K_SPINLOCK
Let the kernel use the new K_SPINLOCK macro and remove the alias.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-07-10 09:27:21 +02:00
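
A brief usage sketch of the K_SPINLOCK scoped-lock macro (lock and function names here are illustrative):

    static struct k_spinlock my_lock;

    void update_shared_state(void)
    {
            K_SPINLOCK(&my_lock) {
                    /* critical section; the lock is released when the
                     * block is left.
                     */
            }
    }
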
Andy Ross a08e23f68e kernel/sched: Fix SMP must-wait-for-switch conditions in abort/join
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early.  That's not sufficient.  The thread state is mutated
from the thread itself on its exit path.  It may still be running!

Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.

There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.

Fixes #58116

Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Andy Ross b89e427bd6 kernel/sched: Rename/redocument wait_for_switch() -> z_sched_switch_spin()
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.

This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Andy Ross d537267fc3 kernel/sched: Fix thread selection misordering with aborted threads
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context.  For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.

But this check was being done AFTER the next candidate thread was
selected from the run queue.  Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.

Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).

Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path).  Simple fix, subtle bug.

Fixes #58040

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-22 08:06:49 +00:00
Gerard Marull-Paretas 4863c5f05b sys/util: extend usage of DIV_ROUND_UP
Many areas of Zephyr divide and round up without using the DIV_ROUND_UP
macro. Make use of it, so that we make use of a tested system macro and
at the same time we make code more readable.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-04-12 16:42:29 +02:00
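
A typical before/after for this kind of change (variable names are illustrative):

    /* Before: open-coded round-up division */
    num_blocks = (total_size + block_size - 1) / block_size;

    /* After: use the helper from <zephyr/sys/util.h> */
    num_blocks = DIV_ROUND_UP(total_size, block_size);
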
Nicolas Pitre 524ac8a29a sched: don't call k_sched_time_slice_set() during early init
All we really want here is to set default parameters. However
k_sched_time_slice_set() also calls z_reset_time_slice(_current)
which expects `_current` to be fully initialized.

Simply initialize `slice_ticks` and `slice_max_prio` with default values
directly. Unfortunately the compiler isn't smart enough to expand
k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE) to a constant expression
at build time so we must do the conversion by hand (and it shouldn't
overflow due to the nature of the value).

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-04-03 19:16:48 -04:00
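
A sketch of the by-hand ms-to-ticks conversion described above (a simplified approximation, not the exact code; variable names are illustrative):

    /* Equivalent of k_ms_to_ticks_ceil32(CONFIG_TIMESLICE_SIZE), written so
     * the compiler can evaluate it as a constant expression.
     */
    slice_ticks = DIV_ROUND_UP(CONFIG_TIMESLICE_SIZE *
                               CONFIG_SYS_CLOCK_TICKS_PER_SEC, 1000);
    slice_max_prio = CONFIG_TIMESLICE_PRIORITY;
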
Nicolas Pitre 405611dc9e sched: remove restriction on single-tick time slices
Slice expirations are now based on the same timeout mechanism as
regular timers which have been recently fixed and proven to work with
single-tick periods.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-04-03 19:16:48 -04:00
Nicolas Pitre 907eea07f2 z_sched_init: don't use arch_num_cpus()
The reason for arch_num_cpus() is to be able to dynamically adapt to
the actual number of available CPUs at run time.

In the z_sched_init() case, it is not the number of active CPUs that
we need but rather the total number of potential CPUs, and that is
represented by CONFIG_MP_MAX_NUM_CPUS not arch_num_cpus().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-04-03 12:36:30 -04:00
Nicolas Pitre 5879d2d6c1 sched: minor time slicing cleanup
Make sliceable() the actual condition for a sliceable thread. Avoid
creating a slice timeout for non sliceable threads. Always reset
slice_expired even if the next thread is not sliceable. Fold
slice_expired_locked() into z_time_slice() to avoid the hidden
unlock/lock. Change `curr` to `thread` as this is not necessarily
the current thread (yet) being set. Make variables static.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-03-16 09:16:59 +01:00
Aastha Grover 877fc3d508 kernel: events: fix waitq timeout race condition
Updates events to prevent a timeout from corrupting the list of
threads that need to be woken up.

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2023-03-09 09:22:21 +01:00
Aastha Grover 5537776898 kernel: Add z_sched_wake_thread API
This API wakes up a given thread and is also called from
z_thread_timeout().

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2023-03-09 09:22:21 +01:00
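
A hedged declaration sketch of the new API (the parameter list is an assumption based on the description, not taken from the commit):

    /* Wake the given thread; is_timeout distinguishes the z_thread_timeout()
     * caller from a normal wakeup.
     */
    void z_sched_wake_thread(struct k_thread *thread, bool is_timeout);
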
Andy Ross c5c3ad95de kernel/sched: Close hole with cross-core timeslice expirations
Moving timeslice events to timeouts isn't quite enough on SMP, as it's
still possible for systems that don't broadcast their timer interrupts
to end up handling an expiration for a foreign CPU.  There, we need an
IPI, and a symmetric call to z_time_slice() (which is idempotent and
fast) in the IPI ISR.

Signed-off-by: Andy Ross <andyross@google.com>
2023-03-09 09:21:12 +01:00
Andy Ross f3afd5a4c9 kernel/sched: Use kernel timeouts for timeslice expirations
Rework the fragile and ad-hoc computation of timeslice expirations
into per-CPU struct _timeout objects with regular callbacks.  The
expiration callbacks themselves simply set a per-cpu flag (they might
run on any CPU), which gets checked at the end of the timer ISR on
every CPU.

This simplifies logic and removes a bunch of code.  It also fixes at
least three bugs:

1. As @npitre discovered: On SMP, the number of ticks announced on any
given CPU is going to be a subset of all expired ticks.  This broke
the accounting of timeslice ticks, and effectively meant that
timeslicing only worked on SMP on systems where one CPU could hog all
the announcements, and only on that CPU.

2. The bootstrap path to arm the timer driver after setting the first
timeout in an empty list couldn't take into account
sys_clock_elapsed() ticks, as it didn't know whether it was being
called underneath an existing announce loop.  Now this code is no
longer responsible for knowing anything about time slicing at all.

3. Also on SMP, there was a case where two CPUs timeslicing
simultaneously could stomp on each others' timeouts in
z_set_timeout_expiry(), as neither had a way of knowing what the
other's state was.  CPUs could miss their own expiration and have to
wait for the slice expiration on the other CPU.  Now, timeouts are
global objects with simple expiration times, and there's no need for
that function at all.

Signed-off-by: Andy Ross <andyross@google.com>
2023-03-09 09:21:12 +01:00
Peter Mitsis 31dfd84fd5 kernel: pipes: Change method of unpending waiters
By the time the working list of readers/writers is processed, it is
possible that the waiting reader/writer being processed has timed out
and is no longer on the wait queue. As such, we cannot blindly
wake the next thread as that next thread might not be the thread we
had just been processing.

To address this, the calls to z_sched_wake() have been replaced
with z_unpend_thread() and z_ready_thread() so that a specific
thread can be safely targeted for waking.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-11 06:45:13 +09:00
Peter Mitsis ca58339e16 kernel: Add routine to walk a wait queue
Adds a routine to safely walk a specified wait queue and invoke a
custom callback function on each waiting thread.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-02-11 06:45:13 +09:00
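
A hedged sketch of what such a walk routine could look like (signature and stop convention are assumptions):

    /* Invoke func(thread, data) for each thread pending on wait_q; a
     * non-zero return from the callback stops the walk early.
     */
    int z_sched_waitq_walk(_wait_q_t *wait_q,
                           int (*func)(struct k_thread *thread, void *data),
                           void *data);
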
Flavio Ceolin 2757e711e1 kernel: sched: Remove possible deadcode
Put z_priq_dumb_add inside an ifdef guard to avoid dead code.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-09 12:07:28 -05:00
Gerard Marull-Paretas 737d799660 kernel: sched: fix ticks logging
- Logging supports printing 64-bit values now. Cast to unsigned long and
  use %lu at all times.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-11-29 09:52:04 +01:00
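
For example (an illustrative call site, not from the commit):

    /* Cast the 64-bit tick count so the format specifier matches. */
    LOG_INF("next timeout in %lu ticks", (unsigned long)ticks);
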
Kumar Gala 4f458ba8de kernel: Convert away from CONFIG_MP_NUM_CPUS
Move runtime code to use arch_num_cpus() instead of CONFIG_MP_NUM_CPUS
and use CONFIG_MP_MAX_NUM_CPUS for ifdef and BUILD_ASSERT macros.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-31 17:09:14 +01:00
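
The resulting pattern, roughly (struct, array, and function names are illustrative):

    struct per_cpu_stats { uint32_t wakeups; };   /* illustrative */

    /* Compile-time sizing and asserts use the maximum possible CPU count. */
    static struct per_cpu_stats stats[CONFIG_MP_MAX_NUM_CPUS];
    BUILD_ASSERT(CONFIG_MP_MAX_NUM_CPUS <= 32, "too many CPUs");

    /* Runtime iteration asks the arch layer how many CPUs came up. */
    void report_wakeups(void)
    {
            unsigned int num_cpus = arch_num_cpus();

            for (unsigned int i = 0; i < num_cpus; i++) {
                    printk("cpu%u: %u wakeups\n", i, stats[i].wakeups);
            }
    }
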
Kumar Gala a1195ae39b smp: Move for loops to use arch_num_cpus instead of CONFIG_MP_NUM_CPUS
Change for loops of the form:

for (i = 0; i < CONFIG_MP_NUM_CPUS; i++)
   ...

to

unsigned int num_cpus = arch_num_cpus();
for (i = 0; i < num_cpus; i++)
   ...

We do the call outside of the for loop so that it only happens once,
rather than on every iteration.

Signed-off-by: Kumar Gala <kumar.gala@intel.com>
2022-10-21 13:14:58 +02:00
Andy Ross c32f376e99 kernel/sched: Fix SMP race on pend
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch).  This
process happens with the caller's lock held, so local interrupts are
masked.  But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!

Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also.  Now we hold the scheduler lock between pend and
the final context switch.

Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address with an assert to prevent future misuse.

[1] z_swap() is a historical API implemented in per-arch assembly for
    older architectures (like ARM32!).  It was designed to be called
    with what at the time was a global IRQ lock, so it doesn't
    understand the idea of a separate scheduler lock.  When we finally
    get all architectures on arch_switch() this design can be cleaned up
    quite a bit.

Signed-off-by: Andy Ross <andyross@google.com>
2022-10-11 12:16:38 -04:00
Kai Vehmanen e81ccef613 kernel/sched: fix condition for CPU mask set
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY=y, the CPU mask
is fixed and cannot be changed while a thread is running.

The current code asserts if the thread state is anything but PREPARED.

We do however have interfaces like k_work_queue_start() where a thread is
started as part of the queue start. To allow the user to set the pinned CPU
for the work queue thread, it needs to be possible to suspend the
thread, set the mask, and then call k_thread_resume(). This seems to be
a valid sequence, so relax the assert check to reflect this.

Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
2022-09-09 16:13:35 -04:00
Simon Hein 02cfbfea51 kernel: comply to coding guidelines MISRA C:2012 Rule 14.4
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)

Use `bool' instead of `int' to represent Boolean values.
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.

This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a

Signed-off-by: Simon Hein <SHein@baumer.com>
2022-07-21 06:16:16 -04:00
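
Typical conversions under this rule (illustrative snippets; function names are placeholders):

    /* Before: integer implicitly treated as Boolean */
    if (count) {
            drain_queue();
    }

    /* After: explicit comparison with zero */
    if (count != 0) {
            drain_queue();
    }

    /* Loop macro form */
    do { flush(); } while (false);   /* instead of while (0) */
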
Andy Ross fb613594c7 kernel/sched: Panic on aborting essential threads
Documentation specifies that aborting/terminating/exiting essential
threads is a system panic condition, but we didn't actually implement
that and allowed it just as for any other thread. At least one app wants to
exploit this documented behavior as a "watchdog" kind of condition,
and that seems reasonable.  Do what we say we're supposed to do.

This also includes a small fix to a test, which seemed like it was
written to exercise exactly this condition.  Except that it failed to
detect whether or not a system fatal error was actually signaled and
was (incorrectly) indicating "success".  Check that we actually enter
the handler.

Fixes #45545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-20 12:34:30 +02:00
Gerard Marull-Paretas cffefc818d kernel: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 09:26:20 +02:00
Jordan Yates 1ef647f396 kernel: add k_can_yield helper function
Implements a function that application and driver code can use to check
whether it is valid to yield (or block) in the current context. This
check is required for functions that can feasibly be run from multiple
contexts. The primary intended use case is power management transition
functions, which can be run by application code explicitly or
automatically in the idle thread by system PM.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2022-05-06 11:33:10 +02:00
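
A usage sketch for the helper (the PM transition function shown is hypothetical):

    static void pm_transition_delay(void)
    {
            if (k_can_yield()) {
                    k_msleep(5);           /* blocking is allowed here */
            } else {
                    k_busy_wait(5 * 1000); /* idle thread/ISR: spin instead */
            }
    }
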
Flavio Ceolin 551038e748 kernel: sched: Change cpu pin only for not executing threads
Do not allow changing the CPU to which a thread is pinned while it is
already executing. This allows further optimizations on some
platforms with incoherent memory, since we can safely assume that the
thread will run on the same CPU and avoid invalidating / flushing the
cache during context switches.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-05-04 13:46:48 -04:00
Andy Ross b4e9ef0691 kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Andy Ross 3267cd327e kernel/sched: Refactor IPI signaling
Minor cleanup, we had a bunch of duplicated #if logic to send IPIs,
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Anas Nashif c9d0248867 kernel: introduce convenience API to pin thread to a CPU
Add an API that clears the CPU mask of a thread and sets it to a specific
CPU.

This is the equivalent of:

        k_thread_cpu_mask_clear(&thread);
        k_thread_cpu_mask_enable(&thread, cpu_idx);

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-04-19 13:05:09 -04:00
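
A usage sketch, assuming the added helper is named k_thread_cpu_pin() (the name and the thread variable are inferred for illustration, not quoted from the commit message):

    /* Clear the mask and pin the worker thread to CPU 1 in one call. */
    k_thread_cpu_pin(&worker_thread, 1);
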
Nicolas Pitre c9e3e0d956 sched: formalize the passing of NULL to z_get_next_switch_handle()
This is an attempt at formally distinguishing and supporting the case
described in 40795 where an architecture doesn't preserve/restore the
complete thread state upon entering/exiting interrupt exception state.

This is mainly about promoting the current behavior from the accepted
workaround to a formal API specification. This workaround is currently
used on ARM64 but RISC-V requires it too.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-18 13:32:49 -04:00
Andy Ross 3e696896bf kernel: Add "per thread" timeslice mechanism
Zephyr's timeslice implementation has always been somewhat primitive.
You get a global timeslice that applies broadly to the whole bottom of
the priority space, with no ability (beyond that one priority
threshold) to tune it to work on certain threads, etc...

This adds an (optionally configurable) API that allows timeslicing to
be controlled on a per-thread basis: any thread at any priority can be
set to timeslice, for a configurable per-thread slice time, and at the
end of its slice a callback can be provided that can take action.
This allows the application to implement things like responsiveness
heuristics, "fair" scheduling algorithms, etc... without requiring
that facility in the core kernel.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-03-09 13:49:44 -05:00
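
A hedged sketch of the per-thread interface this describes (signature approximated from the commit text; callback and thread names are illustrative):

    /* Callback run when the thread's slice expires. */
    static void on_slice_end(struct k_thread *thread, void *data)
    {
            /* e.g. rotate work, collect fairness statistics, ... */
    }

    /* Give one thread a 10-tick slice with a custom expiration action. */
    k_thread_time_slice_set(&worker_thread, 10, on_slice_end, NULL);
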
Peter Mitsis 82c3d531a6 kernel: move thread usage routines to own file
Moves the CONFIG_SCHED_THREAD_USAGE block of code out of sched.c
into its own file. Not only do these routines employ their own private
spin lock, but it is expected that additional usage routines will be
added in the future.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Jeremy Bettis fb1c36f7fd build: hide z_priq_mq_add/z_priq_mq_remove
Move z_priq_mq_add and z_priq_mq_remove into the #ifdef CONFIG_SCHED_MULTIQ
block, because they are only used with that config.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2022-01-04 11:52:10 -05:00
Peter Mitsis f8b76f3b03 kernel: add 'static' keyword to select routines
Applies the 'static' keyword to the following inlined routines:
    z_priq_dumb_add()
    z_priq_mq_add()
    z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-12-13 17:21:58 -05:00
Jeremy Bettis 1e0a36c655 build: Remove unused functions
Removed unused functions, or moved them inside #ifdefs.

This allows using -Werror=unused-function on the clang compiler. Tested
by building the ChromeOS EC on all supported platforms with
-Werror=unused-function.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2021-12-13 15:49:08 -05:00
Andy Ross 410f911018 kernel/sched: Separate idle from app thread stats in THREAD_USAGE
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.

Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads.  The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 52351458f4 kernel/sched: Add timing.h support to thread_usage
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.

(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines.  It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC.  Those days are behind
us and "cycles" can be fast everywhere.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross b62d6e17a4 kernel/sched: Add an optional "all" counter for thread_usage
Tally the runtime of all non-idle threads.  Make it optional via
kconfig to avoid overhead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 4ae3250301 sched: Hook SCHED_USAGE from existing tracing hook
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.

This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight.  And
eventually it will go away as these architectures migrate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 40d12c142d kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:

* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.

* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.

* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)

* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead).  One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.

* Standalone.  It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.

* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals.  No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.

For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
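
A hedged sketch of where the accounting hooks sit (declarations assumed for illustration):

    /* Called from the switch path: start charging cycles to the incoming
     * thread, stop charging the outgoing one.
     */
    void z_sched_usage_start(struct k_thread *thread);
    void z_sched_usage_stop(void);
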
Andy Ross b11e796c36 kernel/sched: Add CONFIG_CPU_MASK_PIN_ONLY
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want to
schedule them symmetrically across CPUs under any circumstance.

In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one.  This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread.  It's a mild optimization, but a basically simple
one.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
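
A rough sketch of the run-queue selection this implies (field and macro names are assumptions, not the actual implementation):

    /* With pin-only scheduling each CPU owns a private run queue;
     * otherwise all CPUs share the single global one.
     */
    #ifdef CONFIG_SCHED_CPU_MASK_PIN_ONLY
    #define curr_runq() (&_current_cpu->ready_q.runq)
    #else
    #define curr_runq() (&_kernel.ready_q.runq)
    #endif
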
Andy Ross b155d06712 kernel/sched: Factor out ready_q initialization
Split "init_ready_q()" into a separate function that operates on the
queue pointer and not the global kernel object.  Pure refactoring.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross 387fdd2e53 kernel/sched: Refactor/simplify run queue accessors
Similar to the previous patch, the various _priq_run_*() functions are
always passed a first argument that is the singleton system run queue
(this is because the same backend functions are used by wait queues).

Refactor into a simpler API that places the access to the run queue in
just a single spot.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross c230fb3580 kernel/sched: Simplify de/queue_thread()
Pure refactoring.  For historical reasons these two functions took a
first argument (a pointer to the run queue) that was always the same.
Eliminate it.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Chen Peng1 0f63d1135c cmsis_rtos_v1: fix thread instance management
Add a bit array into struct osThreadDef_t to indicate whether a
thread instance is used or not. We can then get the first available
instance by searching this array when creating a new thread, and update
the array to mark an instance free again when a thread is terminated.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-09-09 12:01:06 -04:00
Andy Ross 0d763e0a10 cmake/compiler/xcc: sched: Support XCC inlining semantics
Cadence XCC is based off of a very old 4.2 gcc compiler, which didn't
perfectly support C99 "inline" semantics with respect to
cross-translation-unit inline linkage (which Zephyr does not use, our
inlines are static only) and declaration order.

Fix the one spot where we were calling an inline before its
ALWAYS_INLINE definition, and add a flag to suppress the warning so
CI's trying to build with XCC and -Werror don't flip out.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-08 09:28:31 -04:00
Andrew Boie f07df42d49 kernel: make k_current_get() work without syscall
We cache the current thread ID in a thread-local variable
at thread entry, and have k_current_get() return that,
eliminating system call overhead for this API.

DL: changed _current to use z_current_get() as it is
    being used during boot where TLS is not available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-30 20:16:47 -04:00
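
A simplified sketch of the mechanism (guard and variable names approximate the real ones):

    #ifdef CONFIG_THREAD_LOCAL_STORAGE
    /* Written once at thread entry, then read with no system call. */
    extern __thread k_tid_t z_tls_current;

    static inline k_tid_t k_current_get(void)
    {
            return z_tls_current;
    }
    #endif
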
Anas Nashif 8b3f36c656 kernel: move internal headers into include/kernel
Move 2 headers that are internal to the kernel into include/kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-06-16 20:38:55 -04:00
Maksim Masalski 78ba2ec830 coding guidelines: use prototype form with named parameters
Function types shall be in prototype form with named parameters.

Found as a coding guideline violation (MISRA R8.2) by a static
code scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-04 16:20:06 -05:00
Lauren Murphy 4c85b4606b kernel: k_sleep: fix return value for absolute timeout
Fixes calculation of remaining ticks returned from z_tick_sleep
so that it takes absolute timeouts into account.

Fixes #32506

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-05-26 18:11:52 -05:00
Maksim Masalski 970820e92d sched: create unique function name
In include/kernel/thread.h, "struct _thread_base" has a member
called "_wait_q_t *pended_on".
At the same time, kernel/sched.c has a function called
"static _wait_q_t *pended_on()".

The code scanning tool reports a violation (MISRA R5.9) for reuse of a
static identifier, because thread.h is included in sched.c.

I think we can rename the function to avoid misreading in the future.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-25 19:06:21 -04:00
Andy Ross 851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
Torbjörn Leksell f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Anas Nashif 6df4405cca doc: fix typos
Fix various typos in the docs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-30 16:03:08 -04:00
Krzysztof Chruscinski 7dcff6ecfe kernel: Move _kernel from sched to init
_kernel struct can be used when multithreading is disabled.
In that case sched.c may not be compiled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Anas Nashif 3f4f3f6c43 kernel: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to values when computing and comparing against
unsigned variables and other related fixes of the same MISRA rule (10.4)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
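
For example (illustrative):

    /* Match the essential type category of the unsigned operand. */
    if (num_bytes == 0U) {
            offset += 1U;
    }
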
Anas Nashif 5c90ceb105 clock: rename z_tick_get_32 -> sys_clock_tick_get_32
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif 9c1efe6b4b clock: remove z_ from semi-public APIs
The clock/timer APIs are not application facing APIs, however, similar
to arch_ and a few other APIs they are available to implement drivers
and add support for new hardware and are documented and available to be
used outside of the clock/kernel subsystems.

Remove the leading z_ and provide them as clock_* APIs for someone
writing a new timer driver to use.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Lauren Murphy d88ce65463 kernel/sched: only send IPI to abort thread if hardware supports it
Wrap arch_sched_ipi() call in z_thread_abort() with ifdef checking for
hardware support of IPI.

Fixes #32723

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-03-10 14:27:33 -05:00
Spoorthy Priya Yerabolu 4118ed1d4d kernel: sched: removing dead code
Due to recent changes to the scheduler, z_find_first_thread_to_unpend
and z_remove_thread_from_ready_q are not used anymore, so remove the
dead code.

fixes: #32691

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2021-03-05 11:05:25 +03:00
Peter Bigot 0259c864df kernel: add private scheduler APIs
These functions are a subset of proposed public APIs to clean up
several issues related to safely handling waking of threads.  They
have been made private as their interface may change, but their use
will simplify the reimplementation of the k_work functionality.

See: https://github.com/zephyrproject-rtos/zephyr/pull/29668

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
James Harris 6543e06914 kernel: sched: avoid unnecessary lock in z_impl_k_yield
`z_impl_k_yield` unlocked sched_spinlock, only to lock it again
immediately, do a little bit more work, then unlock it again.
This causes performance issues on SMP, where `sched_spinlock`
is often fairly highly contended and cores often end up spinning
for quite a while waiting to retake the lock in `z_swap_unlocked`.

Instead directly pass the spinlock key to `z_swap` and avoid the
extra lock+unlock.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:35:21 -05:00
James Harris 2cd0f66515 kernel: sched: change to 3-way thread priority comparison
`z_is_t1_higher_prio_than_t2` was being called twice in both the
context-switch fastpath and in `z_priq_rb_lessthan`, just to
deal with priority ties. In addition, the API was error-prone
(and too much in the fastpath to be able to assert its invariants)
- see also #32710 for a previous example of this API breaking
and returning a>b but also b>a.

Replacing this with a direct 3-way comparison `z_cmp_t1_prio_with_t2`
sidesteps most of these issues. There is still a concern that
`sgn(z_cmp_t1_prio_with_t2(a,b)) != -sgn(z_cmp_t1_prio_with_t2(b,a))`
but I don't see any way to alleviate this aside from adding an
assert to the fastpath.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:27:14 -05:00
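
A sketch of the comparison contract (declaration and usage illustrative; the sign convention below is one plausible reading of the description):

    /* > 0: t1 should run before t2, < 0: t2 first, 0: genuine tie. */
    int32_t z_cmp_t1_prio_with_t2(struct k_thread *t1, struct k_thread *t2);

    if (z_cmp_t1_prio_with_t2(thread, _current) > 0) {
            /* thread would preempt _current */
    }
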
James Harris 3330ab12d8 kernel: fix yielding between tasks with same deadline
Previously two tasks with the same deadline and priority would
always have `z_is_t1_higher_prio_than_t2` `true` in both directions.

This is logically inconsistent, and results in `k_yield` not actually
yielding between identical threads.

Signed-off-by: James Harris <james.harris@intel.com>
2021-02-27 10:25:47 +01:00
Andy Ross 6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross c0c8cb0e97 kernel: Remove abort and join implementation (UNBISECTABLE)
THIS COMMIT DELIBERATELY BREAKS BISECTABILITY FOR EASE OF REVIEW.
SKIP IF YOU LAND HERE.

Remove the existing implementation of k_thread_abort(),
k_thread_join(), and the attendant facilities in the thread subsystem
and idle thread that support them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 419f37043b kernel/sched: Clamp minimum timeslice when TICKLESS
When the kernel is TICKLESS, timeouts are set as needed, and drivers
all have some minimum amount of time before which they can reliably
schedule an interrupt.  When this happens, drivers will kick the
requested interrupt out by one tick.  This means that it's not
reliably possible to get a timeout set for "one tick in the
future"[1].

And attempting to do that is dangerous anyway.  If the driver will
delay a one-tick interrupt, then code that repeatedly tries to
schedule an imminent interrupt may end up in a state where it is
constantly pushing the interrupt out into the future, and timer
interrupts stop arriving!  The timeout layer actually has protection
against this case.

Finally getting to the point: in recent changes, the timeslice layer
lost its integration with the "imminent" test in the timeout code, so
it's now able to run into this situation: very rapidly context
switching code (or rapidly arriving interrupts) will have the effect
of infinitely[2] delaying timeouts and stalling the whole timeout
subsystem.

Don't try to be fancy.  Just clamp timeslice duration such that a
slice is 2 ticks at minimum and we'll never hit the problem.  Adjust
the two tests that were explicitly requesting very short slice rates.

[1] Of course, the tradeoff is that the tick rate can be 100x higher
or more, so on balance tickless is a huge win.

[2] Actually it only lasts until a 31 bit signed rollover in the HPET
cycle count in practice.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
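
The clamp itself is conceptually a one-liner (a sketch with an illustrative variable name, not the exact code):

    /* Never arm a slice shorter than 2 ticks; a 1-tick request could be
     * pushed out indefinitely by the timer driver's "imminent" handling.
     */
    slice_ticks = MAX(slice_ticks, 2);
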
Andy Ross a202670c18 kernel/sched: Remove now-spurious SWAP_NONATOMIC workaround
Recent work to normalize use of the thread QUEUED state bit means that
we never attempt to remove unqueued threads from the low-level run
queue.  So the old workaround for SWAP_NONATOMIC that was trying to
detect this condition isn't necessary anymore.

Which is serendipitous, because it was written to encode some very
specific logic about the circumstances where _current could be
dequeued that I'd like to be able to break.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 05c468f594 kernel/sched: Make z_ready_thread() safe vs. already-running threads
This is part of the scheduler API, and was always just a synchronized
wrapper around the internal ready_thread() function.  But where the
internal users seem to be careful not to call it on threads that are
not known to be already queued or running, the general users in the
IPC code seem to be less strict.

Add a simple test to detect the case where a thread is already
running.  Right now this just loops over the array of CPUs, so is O(N)
in the CPU count even though N is never more than four for us
currently.  But this is possible without modifying data structures.  A
more scalable way to do this if we ever need to run on very parallel
systems would be to use another state bit for RUNNING, or to keep a
backpointer in the thread struct to the CPU it's running on, etc...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 6b84ab3830 kernel/sched: Adjust locking in z_swap()
Swap was originally written to use the scheduler lock just to select a
new thread, but it would be nice to be able to rely on scheduler
atomicity later in the process (in particular it would be nice if the
assignment to cpu.current could be seen atomically).  Rework the code
a bit so that swap takes the lock itself and holds it until just
before the call to arch_switch().

Note that the local interrupt mask has always been required to be held
across the swap, so extending the lock here has no effect on latency
at all on uniprocessor setups, and even on SMP only affects average
latency and not worst case.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 37866336f9 kernel/sched: Fix race between thread wakeup timeout and abort
Aborted threads will cancel their timeouts, but the timeout subsystem
isn't protected under the same lock so it's possible for a timeout to
fire just as a thread is being aborted and wake it up unexpectedly.
Check the state before blowing anything up.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andrei Emeltchenko 377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving macro LOCKED() to the correct
kernel_internal.h header.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
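
For context, a LOCKED()-style helper built on k_spin_lock looks roughly like this (an approximation, not copied from the header):

    /* Run the following statement/block with lck held; release on exit. */
    #define LOCKED(lck) \
            for (k_spinlock_key_t __i = {}, __key = k_spin_lock(lck); \
                 !__i.key; \
                 k_spin_unlock(lck, __key), __i.key = 1)
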
Andy Ross 4ff457113e kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.

Example:
   * CPU0 is idle, CPU1 is running thread A
   * CPU1 makes high priority thread B runnable
   * CPU1 reaches a schedule point (or returns from an interrupt) and
     decides to run thread B instead
   * CPU0 simultaneously takes its IPI and returns, selecting thread A

Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable.  So we have a deadlock, each CPU is spinning waiting for the
other!

Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation.  The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears.  I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly.  In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.

The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch() and going to reach the end in guaranteed time.

Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 91946ef21c kernel/sched: Refactor, unify management of QUEUED state
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync.  Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity.  The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments.  No behavior
change.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross dd43221540 kernel/sched: Fix race with switch handle
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL.  So it will get the signal to move on when we revert the
value, when clearly we are still running on the stack!

In practice this was found on x86 which poisons the switch context
such that it crashes instantly.

Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.

Fixes #28105

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 1ba7414029 kernel/sched: Correct coherence assert
Some legacy spots in our IPC layer (legally) pass a NULL wait queue to
pend().  Allow this in the coherence assertion.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross 604f0f44b6 kernel/sched: Add missing lock around waitq unpend calls
The two calls to unpend a thread from a wait queue were inexplicably*
unsynchronized, as James Harris discovered.  Rework them to call the
lowest-level primitives so we can wrap the process inside the scheduler
lock.

Fixes #32136

* I took a brief look.  What seems to have happened here is that these
  were originally synchronized via an implicit lock from an outer caller
  (remember the original Uniprocessor irq_lock() API is a recursive
  lock), and they were mostly implemented in terms of middle-level
  calls that were themselves locked.  So those got ported over to the
  newer spinlock but the outer wrapper layer got forgotten.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-10 07:43:18 -05:00
Anas Nashif 39f632e7f0 kernel: fix usage of KERNEL_COHERENCE macro
Add missing CONFIG_ to KERNEL_COHERENCE usage in code.

Fixes #30380

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-03 10:42:04 -05:00
Enjia Mai 53ca709828 tests: coverage: exclude the CODE UNREACHABLE of code coverage
1. Exclude the CODE UNREACHABLE lines while generating the coverage report.
2. Exclude the deprecated memory domain API when calculating code
coverage.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-01-15 12:42:00 -05:00
Andy Ross ef626571b2 kernel/sched: Optimize deadline comparison
Needing to check the current cycle time (which involves a spinlock and
register read on most architectures) is wasteful in the scheduler
priority predicate, which is a hot path.  If we "burn" one bit of
precision (and document the rule), we can do the comparison without
knowing the current time.

2^31 cycles is still far longer than a live deadline thread in any
legitimate realtime app should ever live before being scheduled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-01-15 11:35:50 -05:00
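
A sketch of the wrap-safe comparison this enables (helper name is illustrative):

    /* With live deadlines bounded to well under 2^31 cycles apart, signed
     * subtraction orders two deadlines without reading the current time.
     */
    static inline bool deadline_before(uint32_t d1, uint32_t d2)
    {
            return (int32_t)(d1 - d2) < 0;
    }
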
Andy Ross e956639dd6 kernel: Remove CONFIG_LEGACY_TIMEOUT_API
This was a fallback for an API change several versions ago.  It's time
for it to go.

Fixes: #30893

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-01-14 21:33:16 -05:00
Marcin Niestroj 11cb1cf336 kernel: sched: fix legacy timeout calculation in z_tick_sleep
Ticks should be assigned directly to the timeout value in case of
CONFIG_LEGACY_TIMEOUT_API=y, just as they were before the referenced patch.

Fixes: 7a815d5d99 ("kernel: sched: Use k_ticks_t in z_tick_sleep")
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
2020-12-18 14:03:25 -05:00