Commit graph

2569 commits

Author SHA1 Message Date
Daniel Leung
d3b030be9b kernel: remove @return doc for void functions
For functions returning nothing, there is no need to document
them with @return, as Doxygen complains about "documented empty
return type of ...".
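
Illustrative sketch of the kind of change involved (function
name hypothetical):

    /* Before: Doxygen warns "documented empty return type of ..." */
    /**
     * @brief Reset the widget state.
     *
     * @return N/A
     */
    void widget_reset(void);

    /* After: the @return line is simply dropped for void functions */
    /**
     * @brief Reset the widget state.
     */
    void widget_reset(void);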

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Andy Ross
3457118540 kernel/smp: Fix races in SMP initialization
This has bitrotten a bit.  Early implementations had a synchronous
arch_start_cpu(), but then we started allowing that to be an async
operation.  But that means that CPU start now becomes surprisingly
reentrant to the arch layer (cpu 0 can get a call to start cpu 2 while
cpu 1's initialization code is still running).  That's just error
prone; we never documented the requirements cleanly (the window is
very small, but not so small on a slow simulator!).

Add an extra flag so we don't issue the next start until the last is
out of the arch layer and running in smp_init_top().
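
A minimal sketch of the added gating, with hypothetical names
(the real flag and spin-wait live in kernel/smp.c, and the
arch_start_cpu() signature is per-arch):

    static atomic_t cpu_start_done;    /* hypothetical flag name */

    /* Runs on the new CPU once it is out of the arch layer */
    static void smp_init_top(void *arg)
    {
        (void)atomic_set(&cpu_start_done, 1);
        /* ... per-CPU kernel initialization ... */
    }

    static void start_cpu(int id, k_thread_stack_t *stack, int sz)
    {
        (void)atomic_clear(&cpu_start_done);

        /* May return before the new CPU is fully up (async) */
        arch_start_cpu(id, stack, sz, smp_init_top, NULL);

        /* Gate the next start on this CPU reaching smp_init_top() */
        while (!atomic_get(&cpu_start_done)) {
        }
    }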

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-11 11:53:53 +01:00
Peter Mitsis
d1353a4584 kernel: pipes: add pipe flush routines
Adds two routines to flush pipe objects:
   k_pipe_flush()
     - This routine flushes the entire pipe. That includes both
     the pipe's buffer and all pended writers. It is equivalent
     to reading everything into a giant temporary buffer which
     is then discarded.
   k_pipe_buffer_flush()
     - This routine flushes only the pipe's buffer (if it exists).
     It is equivalent to reading a maximum of "buffer size" bytes
     into a temporary buffer which is then discarded.
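
A usage sketch (both routines take just the pipe object, per the
description above):

    #include <zephyr.h>

    K_PIPE_DEFINE(my_pipe, 256, 4);

    void reset_pipe(void)
    {
        /* Discard only buffered bytes; pended writers remain */
        k_pipe_buffer_flush(&my_pipe);

        /* Discard buffered bytes and all pended writers */
        k_pipe_flush(&my_pipe);
    }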

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis
da7fbad057 kernel: pipes: fix race condition
Fixes a race condition in the k_pipe_cleanup() routine by adding
a spinlock. Additionally, internal counters are now reset after
freeing the buffer as the pipe has now become a bufferless pipe.
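
The locking pattern, sketched (lock and field names assumed for
illustration, not the literal diff):

    static struct k_spinlock cleanup_lock;

    k_spinlock_key_t key = k_spin_lock(&cleanup_lock);

    k_free(pipe->buffer);
    pipe->buffer = NULL;

    /* Reset counters only after the free: the pipe is now a
     * bufferless pipe.
     */
    pipe->size = 0;
    pipe->bytes_used = 0;

    k_spin_unlock(&cleanup_lock, key);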

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis
7aa874a251 kernel: pipes: refactor
Minor refactoring done to the pipes codebase to remove unnecessary
wrapper functions.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis
434cd1505e kernel: remove NUM_PIPE_ASYNC_MSGS
Removes references to NUM_PIPE_ASYNC_MSGS, as asynchronous pipe
operations are not supported.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis
ecd3164671 kernel: pipes: fix build warnings
Resolves void pointer arithmetic build warnings in k_pipe_put() by
casting the pointer to a uint8_t pointer.
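
The fix pattern in general form:

    #include <stddef.h>
    #include <stdint.h>

    static void advance(void **data, size_t n)
    {
        /* Arithmetic on void * is non-standard (a GNU extension);
         * cast to uint8_t * so the pointer advances in bytes
         * without triggering the warning.
         */
        *data = (uint8_t *)*data + n;
    }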

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis
0ebd6c7f26 kernel/sched: enable/disable runtime stats
Adds support to enable/disable both thread and cpu runtime
stats.
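
Sketch of the intended use (API names as in the current tree;
treat as illustrative):

    /* Pause per-thread stats around a measurement-sensitive region */
    k_thread_runtime_stats_disable(k_current_get());
    /* ... */
    k_thread_runtime_stats_enable(k_current_get());

    /* CPU (system) runtime stats can be toggled the same way */
    k_sys_runtime_stats_disable();
    k_sys_runtime_stats_enable();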

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis
4eb1dd02cc kernel: extend CPU runtime stats
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information if desired
to tune the system.
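
A query sketch (the base struct carries execution/total cycle
counts; the current/peak/average fields follow this commit's
description and their exact names are assumed):

    k_thread_runtime_stats_t stats;

    if (k_thread_runtime_stats_all_get(&stats) == 0) {
        printk("total: %llu cycles\n",
               (unsigned long long)stats.total_cycles);
    }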

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis
572f1db56a kernel: extend thread runtime stats
When the new Kconfig option CONFIG_SCHED_THREAD_USAGE_ANALYSIS
is enabled, additional timing stats are collected during context
switches. This extra information allows a developer to obtain
the current, longest, average and total lengths of the time that
a thread has been scheduled to execute.

A developer can in turn use this information to tune their app and/or
alter their scheduling policies.
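
For example, a thread might sample its own numbers (field names
assumed per the description above):

    k_thread_runtime_stats_t stats;

    k_thread_runtime_stats_get(k_current_get(), &stats);

    /* Only populated with CONFIG_SCHED_THREAD_USAGE_ANALYSIS=y */
    printk("cur %llu peak %llu avg %llu\n",
           (unsigned long long)stats.current_cycles,
           (unsigned long long)stats.peak_cycles,
           (unsigned long long)stats.average_cycles);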

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis
5deaffb2ee kernel: update z_sched_thread_usage()
This commit does two things to z_sched_thread_usage(). First,
it updates the API so that it accepts a pointer to the runtime
stats instead of simply returning the usage cycles. This gives it
the flexibility to retrieve additional statistics in the future.

Second, the runtime stats are only updated if the specified thread
is the current thread running on the current core.
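
The shape of the change, sketched (exact prototypes may differ):

    /* Before: returned the usage cycles directly */
    uint64_t z_sched_thread_usage(struct k_thread *thread);

    /* After: fills a caller-provided stats struct, leaving room
     * for additional statistics later.
     */
    void z_sched_thread_usage(struct k_thread *thread,
                              struct k_thread_runtime_stats *stats);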

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis
82c3d531a6 kernel: move thread usage routines to own file
Moves the CONFIG_SCHED_THREAD_USAGE block of code out of sched.c
into its own file. Not only do they employ their own private
spin lock, but it is expected that additional usage routines will be
added in the future.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis
48f516469a kernel: Fix typo in macro name
Fixes a typo in the macro ARCH_DYMANIC_OBJ_K_THREAD_ALIGNMENT
so that DYMANIC becomes DYNAMIC.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-07 11:20:46 -05:00
Gerard Marull-Paretas
0b0435a96c device: deprecate (z_)device_usable_check
The functionality provided by device_usable_check is already provided by
device_is_ready. The (z_)device_usable_check APIs have been
re-implemented using the (z_)device_is_ready APIs and have been marked
as deprecated.
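
Migration sketch:

    /* Deprecated: integer status, 0 when the device is ready */
    if (device_usable_check(dev) != 0) {
        return -ENODEV;
    }

    /* Preferred: boolean predicate */
    if (!device_is_ready(dev)) {
        return -ENODEV;
    }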

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-01-07 10:41:23 -05:00
Gerard Marull-Paretas
47bfb6fb4c device: implement device_is_ready as syscall
Instead of using device_usable_check() syscall, implement a new syscall
for device_is_ready that uses z_device_is_ready underneath.
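
Sketch of the resulting syscall shape (close to, but not
necessarily identical to, the actual code):

    bool z_impl_device_is_ready(const struct device *dev)
    {
        return z_device_is_ready(dev);
    }

    #ifdef CONFIG_USERSPACE
    static inline bool z_vrfy_device_is_ready(const struct device *dev)
    {
        Z_OOPS(Z_SYSCALL_OBJ_INIT(dev, K_OBJ_ANY));
        return z_impl_device_is_ready(dev);
    }
    #include <syscalls/device_is_ready_mrsh.c>
    #endif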

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-01-07 10:41:23 -05:00
Gerard Marull-Paretas
b2b8fdf539 device: s/z_device_ready/z_device_is_ready
Rename z_device_ready to z_device_is_ready. Function name suggests a
boolean result this way, in line with other functions (e.g.
device_is_ready).

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-01-07 10:41:23 -05:00
Berend Ozceri
b208e5811e kernel/swap: Initialize dummy thread's resource pool
The resource pool of the short-lived dummy thread "stub" may be
inherited by other threads created during system initialization. This
commit initializes this resource pool to NULL or the system pool to
ensure that a well-defined resource pool propagates to other threads
that inherit it from the dummy thread.

Fixes #41482.

Signed-off-by: Berend Ozceri <berend@recogni.com>
2022-01-06 11:57:18 -05:00
Jeremy Bettis
fb1c36f7fd build: hide z_priq_mq_add/z_priq_mq_remove
Move z_priq_mq_add and z_priq_mq_remove into the #ifdef
CONFIG_SCHED_MULTIQ block, because they are only used with that config.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2022-01-04 11:52:10 -05:00
Daniel Leung
b6dd960be8 kernel: userspace: fix dynamic kernel object alignment
Previous commit 55350a93e9 fixing
address-of-packed-member warnings uncovered an issue with
the alignment of dynamic kernel objects. On 64-bit platforms,
the alignment is 16 bytes instead of 4/8 bytes (as for a
pointer, void *). This changes the function that maps kernel
object types to alignments so that it uses the dynamic object
struct as the basis for alignment instead of simply using pointers.

This also uncomments the assertion added in the previous commit
55350a93e9 so that we can keep
an eye on the alignment in the future. Note that the assertion
is moved after checking if the incoming kernel object is
dynamically allocated. Static kernel objects are not subjected
to this alignment requirement.

Fixes #41062

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-12-20 12:48:58 -05:00
Peter Mitsis
f8b76f3b03 kernel: add 'static' keyword to select routines
Applies the 'static' keyword to the following inlined routines:
    z_priq_dumb_add()
    z_priq_mq_add()
    z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-12-13 17:21:58 -05:00
Lixin Guo
d4826d874e kernel: work: remove unused if statement
The work == NULL condition has already been checked earlier, so
there is no need to check it again.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
2021-12-13 17:20:56 -05:00
Jeremy Bettis
1e0a36c655 build: Remove unused functions
Removed unused functions, or moved them inside #ifdefs.

This allows using -Werror=unused-function on the clang compiler. Tested
by building the ChromeOS EC on all supported platforms with
-Werror=unused-function.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2021-12-13 15:49:08 -05:00
Carles Cufi
55350a93e9 kernel: userspace: Fix address-of-packed-mem warning
The warning below appears once -Waddress-of-packed-member is enabled:

/home/carles/src/zephyr/zephyr/kernel/userspace.c: In function
'unref_check':
/home/carles/src/zephyr/zephyr/kernel/userspace.c:471:28: warning:
converting a packed 'struct z_object' pointer (alignment 4) to a 'struct
dyn_obj' pointer (alignment 16) may result in an unaligned pointer value
[-Waddress-of-packed-member]
  471 |    CONTAINER_OF(ko, struct dyn_obj, kobj);

To avoid the warning, use an intermediate void * variable.

More info in #16587.
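
The workaround, sketched:

    /* Launder the pointer through void * so the compiler no
     * longer reasons about the packed source's alignment.
     */
    void *vko = ko;
    struct dyn_obj *dyn = CONTAINER_OF(vko, struct dyn_obj, kobj);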

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
Carles Cufi
2000cdf6dc kernel: mmu: Fix access to unpacked member inside packed struct
The following warning is triggered by GCC when
-Waddress-of-packed-member is enabled:

/home/carles/src/zephyr/zephyr/kernel/mmu.c: In function
'free_page_frame_list_put':
/home/carles/src/zephyr/zephyr/kernel/mmu.c:383:42: warning: taking
address of packed member of 'struct z_page_frame' may result in an
unaligned pointer value [-Waddress-of-packed-member]
  383 |  sys_slist_append(&free_page_frame_list, &pf->node);

This is due to the fact that sys_snode_t node is an unpacked structure
inside a packed z_page_frame structure, so that the alignment of the
former cannot be ensured if placed inside the latter.

Given that alignment of z_page_frame is ensured by the code, silence the
compiler by going through an intermediate variable.

More info in #16587.
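
The fix pattern, sketched (the exact intermediate step may
differ from the actual diff):

    /* Take the member address into an intermediate variable
     * instead of passing &pf->node directly; alignment of
     * z_page_frame itself is ensured by the code.
     */
    sys_snode_t *node = &pf->node;

    sys_slist_append(&free_page_frame_list, node);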

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
Daniel Leung
650a629b08 debug: gdbstub: remove start argument from z_gdb_main_loop()
Whether this is the first GDB break can be tracked in the main
GDB stub code. There is no need to store that state in the
architecture layer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Flavio Ceolin
623ed5ae29 pm: Remove invalid comments
Remove comments referencing an old function / behavior.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-29 19:37:55 -05:00
Flavio Ceolin
72262475f4 pm: idle: Remove not necessary branch
Remove a constant branch that directly calls pm_save_idle.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-29 19:37:55 -05:00
NingX Zhao
5287ce8ff0 poll: modify the function z_vrfy_k_poll
Remove the 'U' suffix so that the type of num_events is not
changed, and make sure the Z_SYSCALL_VERIFY macro check remains
meaningful.

Fixes #40614

Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
2021-11-25 18:23:51 -05:00
Daniel Leung
fff9180614 kernel: mmu: make virtual region bitmap static
The virtual region bitmap bitarray struct is only used within
the source file, so it can be declared static.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-24 14:22:23 -05:00
Jordan Yates
f515aa0cf0 device: supported devices visitor API
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.

Implements #37793.
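
A usage sketch, assuming the visitor API mirrors the
required-devices one (callback signature assumed):

    static int print_one(const struct device *dev, void *context)
    {
        printk("supported: %s\n", dev->name);
        return 0;
    }

    void dump_supported(const struct device *dev)
    {
        (void)device_supported_foreach(dev, print_one, NULL);
    }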

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2021-11-23 12:17:14 +01:00
Daniel Leung
a60317167b kernel: mem_domain: k_mem_domain_add_thread to return errors
This updates k_mem_domain_add_thread() to return errors so
the application has a chance to recover.
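
For example (error handling illustrative):

    int ret = k_mem_domain_add_thread(&app_domain, tid);

    if (ret != 0) {
        /* recover, e.g. leave the thread in its current domain */
        printk("k_mem_domain_add_thread failed: %d\n", ret);
    }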

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Daniel Leung
bb595a85f1 kernel: mem_domain: add/remove partition funcs to return errors
This changes both k_mem_domain_add_partition() and
k_mem_domain_remove_partition() to return errors instead of
asserting when errors are encountered. This gives the application
a chance to recover.

The arch_mem_domain_partition_add()/_remove() will be modified
later together with all the other arch_mem_domain_*() changes
since the architecture code for partition addition and removal
functions usually cannot be separately changed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Daniel Leung
fb91ce2e21 kernel: mem_domain: init function to return error values
This changes k_mem_domain_init() to return error values
instead of asserting when errors are encountered.
This gives applications a chance to recover if needed.
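
A combined sketch for this and the preceding partition change
(partition objects part0/part1 hypothetical):

    struct k_mem_domain dom;
    struct k_mem_partition *parts[] = { &part0 };

    int ret = k_mem_domain_init(&dom, ARRAY_SIZE(parts), parts);

    if (ret == 0) {
        ret = k_mem_domain_add_partition(&dom, &part1);
    }
    if (ret != 0) {
        printk("memory domain setup failed: %d\n", ret);
    }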

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Krzysztof Chruscinski
c0808e3f59 logging: Minimal mode configuration cleanup
Remove the LOG_MINIMAL Kconfig option, which was confusing
since LOG_MODE_MINIMAL existed. LOG_MINIMAL was used to
force minimal mode, but because of invalid dependencies
it was leading to issues.

Refactored code to use LOG_MODE_MINIMAL everywhere and
renamed LOG_MINIMAL to LOG_DEFAULT_MINIMAL, which has impact
on the default logging mode (which can still be changed later
in a conf file or in menuconfig).

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-11-20 11:58:40 -05:00
Flavio Ceolin
7dd4297214 pm: Remove unused parameter
The number-of-ticks parameter of z_pm_save_idle_exit is not used, so
there is no need to have it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-17 11:15:49 -05:00
Torbjörn Leksell
86d8b36955 Tracing: k_free tracing hook heap reference added
Added a heap reference parameter to the k_free tracing
hook to allow tracing of the pointer which was
passed as a parameter to a k_free call.
As part of this update, the defines for this hook
in the various tracing formats were also updated.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-11-16 09:45:01 -05:00
Jordan Yates
44b8a0d199 scripts: gen_handles.py: remove size restrictions
With `gen_handles.py` now running on the first pre-built image,
`zephyr_pre0.elf` there is no requirement for the device handle arrays
to remain the same size after processing.

Remove the padding generated in `gen_handles.py`, as well as the
temporary option `CONFIG_DEVICE_HANDLE_PADDING` which was added to work
around this issue.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2021-11-16 10:41:59 +01:00
Lixin Guo
6aa783ed6a kernel: mmu: exclude some funcs from coverage
page_frame_dump() and z_page_frames_dump() are used for
debug printing, so there is no need to cover those functions.
The __weak function is also excluded, as every test overrides it.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
2021-11-12 11:57:50 -05:00
Andy Ross
410f911018 kernel/sched: Separate idle from app thread stats in THREAD_USAGE
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.

Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads.  The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
f169c5bc13 kernel: Swap RUNTIME_STATS implementation
Clean up RUNTIME_STATS to separate the API from the individual data
backends.  Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles.  Move the kconfig for that into the runtime
stats menu, since it's part of the family now.

Also remove a lot of needless #if's around the declarations.  Unused
structs and uncalled functions don't need to be explicitly hidden.  An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
52351458f4 kernel/sched: Add timing.h support to thread_usage
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.

(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines.  It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC.  Those days are behind
us and "cycles" can be fast everywhere.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
b62d6e17a4 kernel/sched: Add an optional "all" counter for thread_usage
Tally the runtime of all non-idle threads.  Make it optional via
kconfig to avoid overhead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
4ae3250301 sched: Hook SCHED_USAGE from existing tracing hook
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.

This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight.  And
eventually it will go away as these architectures migrate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross
40d12c142d kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:

* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.

* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.

* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)

* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead).  One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.

* Standalone.  It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.

* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals.  No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.

For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Krzysztof Chruscinski
8979fbc906 kernel: timer: Call user handler without spinlock
Add spinlock unlocking before calling the timer expiration
handler. Locking was introduced by dde3d6c.
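
The pattern, sketched (lock name assumed):

    k_spinlock_key_t key = k_spin_lock(&timer_lock);

    /* ... dequeue the expired timer ... */

    k_spin_unlock(&timer_lock, key);
    timer->expiry_fn(timer);    /* user handler runs unlocked */
    key = k_spin_lock(&timer_lock);

    /* ... finish bookkeeping ... */
    k_spin_unlock(&timer_lock, key);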

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-11-08 11:05:49 -05:00
Flavio Ceolin
9444480c7b pm: Better return type for pm_system_suspend
Previously, pm_system_suspend returned PM_STATE_ACTIVE when the CPU
did not enter a low power state, and a different state when it did
(even though by then the CPU had already left that state and was
active again). It now returns true when the CPU entered a low power
state and false otherwise.
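
A caller under the new contract (signature assumed):

    if (pm_system_suspend(ticks)) {
        /* a low power state was entered (and has been exited) */
    } else {
        /* no low power state was entered */
    }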

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-06 10:21:53 -04:00
Andy Ross
7dee7a6139 kernel/sched: Fix race with thread return values
There was a brief (but seen in practice on real apps on real
hardware!) race with the switch-based z_swap() implementation.  The
thread return value was being initialized to -EAGAIN after the
enclosing lock had been released.  But that lock is supposed to be
atomic with the thread suspend.

This opened a window for another racing thread to come by and "wake
up" our pending thread (which is fine on its own), set its return
value (e.g. to 0 for success) and then have that value clobbered by
the thread continuing to suspend itself outside the lock.

Melodramatic aside: I continue to hate this
arch_thread_return_value_set() API; it needs to die.  At best it's a
mild optimization on a handful of architectures (e.g. x86 implements
it by writing to the EAX register save slot in the context block).
Asynchronous APIs are almost always worse than synchronous ones, and
in this case it's an async operation that races against literal
context switch code that can't use traditional locking strategies.

Fixes #39575

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-10-25 12:31:06 +02:00
Chris Reed
f8f3c00d07 kernel: thread.c: remove unused THREAD_COOKIE macro.
This macro was not referenced anywhere.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-22 18:08:56 -04:00
Peter Mitsis
ae394bff7c kernel: add support for event objects
Threads may wait on an event object such that any events posted to
that event object may wake a waiting thread if the posting satisfies
the waiting threads' event conditions.

The configuration option CONFIG_EVENTS is used to control the inclusion
of events in a system as their use increases the size of
'struct k_thread'.
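
A usage sketch with the new API (event bits illustrative):

    #include <zephyr.h>

    K_EVENT_DEFINE(my_event);

    #define EVT_RX_DONE BIT(0)
    #define EVT_TX_DONE BIT(1)

    void producer(void)
    {
        k_event_post(&my_event, EVT_RX_DONE);
    }

    void consumer(void)
    {
        /* Wakes when any requested event is posted; returns the
         * matching events.
         */
        uint32_t evts = k_event_wait(&my_event,
                                     EVT_RX_DONE | EVT_TX_DONE,
                                     false, K_FOREVER);
        ARG_UNUSED(evts);
    }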

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-10-16 06:27:10 -04:00
Chris Reed
ab4d69baf3 kernel: work_q: use flags_get() in work_delayable_busy_get_locked().
The k_work::flags field is not an atomic_t and would cause a
-Wpointer-sign warning on some compilers. This function was the only
one in work.c to use atomic_get(), so there is no benefit to atomicity.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-16 06:23:46 -04:00