For functions returning nothing, there is no need to document them
with @return, as Doxygen complains about "documented empty
return type of ...".
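For example (hypothetical function):

  /**
   * @brief Flush any pending work.
   *
   * Note: no @return tag here; the function returns nothing, and
   * documenting that triggers the warning above.
   */
  void flush_pending_work(void);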
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information, if
desired, to help tune the system.
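A rough sketch of how the aggregate numbers might be read; the field
names (e.g. total_cycles, peak_cycles) are assumptions about the
final struct layout:

  #include <kernel.h>

  uint64_t cpu_peak_usage(void)
  {
      k_thread_runtime_stats_t stats;

      /* Aggregate usage across all threads, bounded by how often
       * the idle thread is scheduled.
       */
      if (k_thread_runtime_stats_all_get(&stats) != 0) {
          return 0;
      }

      return stats.peak_cycles;
  }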
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
When the new Kconfig option CONFIG_SCHED_THREAD_USAGE_ANALYSIS
is enabled, additional timing stats are collected during context
switches. This extra information allows a developer to obtain the
current, longest, average and total lengths of time that a thread
has been scheduled to execute.
A developer can in turn use this information to tune their app and/or
alter their scheduling policies.
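For instance, a monitoring thread could periodically sample another
thread's numbers (a sketch; the field names below are assumptions
for what the text above calls "current" and "longest"):

  void sample_thread_usage(k_tid_t tid, uint64_t *current,
                           uint64_t *longest)
  {
      k_thread_runtime_stats_t stats;

      if (k_thread_runtime_stats_get(tid, &stats) == 0) {
          *current = stats.current_cycles;
          *longest = stats.peak_cycles;
      }
  }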
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Applies the 'static' keyword to the following inlined routines:
z_priq_dumb_add()
z_priq_mq_add()
z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.
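In C terms the change is roughly the following (signature taken from
the first routine; body elided):

  /* before: a header declaration gives it external linkage */
  void z_priq_dumb_add(sys_dlist_t *pq, struct k_thread *thread);

  /* after: internal linkage, local to the one file that uses it */
  static ALWAYS_INLINE void z_priq_dumb_add(sys_dlist_t *pq,
                                            struct k_thread *thread)
  {
      /* ... */
  }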
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Clean up RUNTIME_STATS to separate the API from the individual data
backends. Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles. Move the kconfig for that into the runtime
stats menu, since it's part of the family now.
Also remove a lot of needless #if's around the declarations. Unused
structs and uncalled functions don't need to be explicitly hidden. An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.
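For illustration (the option and field names here are only examples
of the pattern):

  struct k_thread_runtime_stats {
  #ifdef CONFIG_SCHED_THREAD_USAGE
      uint64_t execution_cycles;
  #endif
  };

  /* Any caller reading stats.execution_cycles without the option
   * enabled fails to compile, so the surrounding declarations need
   * no #ifdef of their own.
   */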
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:
* Correctly synchronized: you can't race against a running thread
(potentially on another CPU!) while querying its usage.
* Realtime results: you get the right answer always, up to timer
precision, even if a thread has been running for a while
uninterrupted and hasn't updated its total.
* Portable, no need for per-architecture code at all for the simple
case. (It leverages the USE_SWITCH layer to do this, so won't work
on older architectures)
* Faster/smaller: minimizes use of 64 bit math; lower overhead in
thread struct (keeps the scratch "started" time in the CPU struct
instead). One 64 bit counter per thread and a 32 bit scratch
register in the CPU struct.
* Standalone. It's a core (but optional) scheduler feature, no
dependence on para-kernel configuration like the tracing
infrastructure.
* More precise: allows architectures to optionally call a trivial
zero-argument/no-result cdecl function out of interrupt entry to
avoid accounting for ISR runtime in thread totals. No configuration
needed here, if it's called then you get proper ISR accounting, and
if not you don't.
For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.
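The accounting itself is tiny; roughly (illustrative only, not the
actual code):

  /* switch-in: remember when this CPU started running the thread */
  static void usage_start(void)
  {
      _current_cpu->usage0 = k_cycle_get_32();
  }

  /* switch-out (or optional ISR entry hook): fold the elapsed
   * cycles into the thread's 64 bit total
   */
  static void usage_stop(struct k_thread *t)
  {
      uint32_t now = k_cycle_get_32();

      t->base.usage += now - _current_cpu->usage0;
      _current_cpu->usage0 = now;
  }

A query against a currently-running thread adds the same in-progress
delta to the stored total, which is what makes the results realtime.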
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Threads may wait on an event object such that any events posted to
that event object may wake a waiting thread if the posting satisfies
the waiting thread's event conditions.
The configuration option CONFIG_EVENTS is used to control the inclusion
of events in a system as their use increases the size of
'struct k_thread'.
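A usage sketch (API names as introduced by this series; exact
signatures may differ):

  K_EVENT_DEFINE(my_events);

  #define EV_RX_DONE BIT(0)
  #define EV_TX_DONE BIT(1)

  void producer(void)
  {
      /* Post one or more events to the object. */
      k_event_post(&my_events, EV_RX_DONE);
  }

  void consumer(void)
  {
      /* Block until any of the requested events is posted. */
      uint32_t events = k_event_wait(&my_events,
                                     EV_RX_DONE | EV_TX_DONE,
                                     false, K_FOREVER);

      if (events & EV_RX_DONE) {
          /* handle RX completion */
      }
  }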
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
This introduces two new macros K_THREAD_PINNED_STACK_DEFINE()
and K_THREAD_PINNED_STACK_ARRAY_DEFINE() to define a thread
stack and a thread stack array in the pinned section.
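Intended usage mirrors the existing stack definition macros:

  /* One 1024-byte stack placed in the pinned (always-resident)
   * section.
   */
  K_THREAD_PINNED_STACK_DEFINE(pinned_stack, 1024);

  /* An array of four such stacks. */
  K_THREAD_PINNED_STACK_ARRAY_DEFINE(pinned_stacks, 4, 1024);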
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
A stack can already be declared extern via K_KERNEL_STACK_EXTERN().
This adds similar macros for stack arrays.
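For example (the macro name and arguments are assumed to mirror
K_KERNEL_STACK_ARRAY_DEFINE()):

  /* header: declare an array defined in another file */
  K_KERNEL_STACK_ARRAY_EXTERN(sys_stacks, 4, 2048);

  /* source file: the matching definition */
  K_KERNEL_STACK_ARRAY_DEFINE(sys_stacks, 4, 2048);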
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Moves Z_KERNEL_STACK_LEN() higher in the header file so it is
near the other stack size calculation macros. It was also
previously declared after its first use in the header, so this
is a logical move.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.
Also extends this to gather per-thread demand paging
statistics.
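A sketch of reading the counters; the struct and field names are
assumptions based on the description above:

  #include <kernel.h>
  #include <sys/mem_manage.h>

  void show_paging_stats(void)
  {
      struct k_mem_paging_stats_t stats;

      k_mem_paging_stats_get(&stats);

      printk("faults: %lu (IRQ locked %lu, unlocked %lu)\n",
             stats.pagefaults.cnt,
             stats.pagefaults.irq_locked,
             stats.pagefaults.irq_unlocked);
      printk("evicted: %lu clean, %lu dirty\n",
             stats.eviction.clean, stats.eviction.dirty);
  }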
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a newer, much smaller and simpler implementation of abort and
join. No need to involve the idle thread. No need for a special code
path for self-abort. Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation. All work in both
calls happens under a single locked path with no unexpected
synchronization points.
This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.
Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
THIS COMMIT DELIBERATELY BREAKS BISECTABILITY FOR EASE OF REVIEW.
SKIP IF YOU LAND HERE.
Remove the existing implementation of k_thread_abort(),
k_thread_join(), and the attendant facilities in the thread subsystem
and idle thread that support them.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Move thread definitions to their own header to avoid redeclaration
and redefinition of types, which is not allowed in some standards.
Fixes #29937
Signed-off-by: Anas Nashif <anas.nashif@intel.com>