Commit graph

3,255 commits

Author SHA1 Message Date
Alberto Escolar Piedras
9eeb78d86d COVERAGE: Fix COVERAGE_GCOV dependencies
CONFIG_COVERAGE has been incorrectly used to
change the code defaults of other kconfig options
(stack sizes, etc.), as well as some samples'
behaviour, which should not have depended on it.

Instead those should have depended on COVERAGE_GCOV
which, being the option that adds special code and
temporary RAM storage for embedded targets,
requires changes to many features.

When building for the native targets, all this was
unnecessary.

=> Fix the dependency.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2023-08-24 15:36:31 +02:00
Andrei Emeltchenko
9551f2708b kernel: timeout: Remove unneeded assignment
Fix warnings about values that are stored but never read.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2023-08-22 10:46:41 +01:00
Daniel Leung
e38fc6de8a cmake: enable -Wshadow partially for in-tree code
This enables -Wshadow to warn about shadowed variables in
in-tree code under arch/, boards/, drivers/, kernel/,
lib/, soc/, and subsys/.

Note that this does not enable it globally because
out-of-tree modules will probably take some time to fix
(or not at all depending on the project), and it would be
great to avoid introduction of any new shadow variables
in the meantime.

Also note that this is done in a minimally
invasive way so that it is easy to revert when we enable
-Wshadow globally. Source files under modules/, samples/
and tests/ are currently excluded because there does not
seem to be a trivial way to add -Wshadow there without
going through all CMakeLists.txt to add the option
(as there are 1000+ files to change).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-22 11:39:58 +02:00
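
For illustration, a minimal example of the kind of shadowing -Wshadow reports
(not taken from the tree); the inner declaration hides the function parameter
of the same name:

    #include <stddef.h>

    size_t count_nonzero(const int *buf, size_t len)
    {
        size_t count = 0;

        for (size_t i = 0; i < len; i++) {
            if (buf[i] != 0) {
                /* -Wshadow: declaration of 'len' shadows a parameter */
                size_t len = count + 1;

                count = len;
            }
        }
        return count;
    }
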
Jamie McCrae
77a00f1352 kernel: banner: Allow for customising version
This allows for further (out of tree) customisation of the boot
banner version string when devices boot.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2023-08-21 10:09:46 +02:00
Grant Ramsay
45701e696a kernel: sched: Disable FPU context when thread ends
When `CONFIG_FPU_SHARING` is enabled each `k_thread` struct has a saved
floating point context (`saved_fp_context`). During a context switch, the
current FPU owner's (`_current_cpu->arch.fpu_owner`) registers are saved
to its `saved_fp_context`, and the destination thread's FPU registers are
loaded from its `saved_fp_context`.

When a thread ends, it does not release ownership of the FPU
(`_current_cpu->arch.fpu_owner`). This is problematic if the `k_thread`
struct was allocated on the stack. The next context switch will save the
FPU registers into `k_thread -> saved_fp_context` which may now be out of
scope. This will likely (but not always) result in a crash.

Adding `arch_float_disable(thread);` when a thread ends disables
preservation of floating point context information, fixing this issue.

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2023-08-16 17:05:25 +02:00
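
As an illustrative sketch of the hazard described in the commit above
(hypothetical code, not from the tree): a `struct k_thread` placed on a
caller's stack can go out of scope while the CPU still records the dead
thread as the FPU owner.

    #include <zephyr/kernel.h>

    #define WORKER_STACK_SIZE 1024
    K_THREAD_STACK_DEFINE(worker_stack, WORKER_STACK_SIZE);

    static void fp_worker(void *a, void *b, void *c)
    {
        volatile float f = 1.0f;   /* touches FPU registers */

        f *= 2.0f;
        /* Thread returns here; without the fix this CPU may still list
         * the k_thread below as fpu_owner. */
    }

    void spawn_and_forget(void)
    {
        struct k_thread worker;    /* thread object on this stack frame */

        k_thread_create(&worker, worker_stack,
                        K_THREAD_STACK_SIZEOF(worker_stack),
                        fp_worker, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(1), 0, K_NO_WAIT);
        k_thread_join(&worker, K_FOREVER);
        /* `worker` goes out of scope; a later context switch saving FPU
         * state into its saved_fp_context would write to a dead frame. */
    }
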
Daniel Leung
b8e0de2ad0 kernel: mmu: fix bitmap set and clear under direct map
When CONFIG_KERNEL_DIRECT_MAP is enabled, the region to be mapped
or unmapped can be outside of the virtual memory space, wholly
within it, or overlap partially. Additional processing is
needed to make sure we only manipulate the bits within
the bitmap, in other words, only the pages represented by
the bitmap.

Fixes #59549

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-15 16:30:55 -04:00
Daniel Leung
9c0ff33e04 kernel: rename shadow variables
Renames shadow variables found by -Wshadow.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-10 08:14:12 +00:00
Flavio Ceolin
d16c5b9048 kernel: canaries: Allow using TLS to store it
Add a new option to use thread local storage for stack
canaries. This makes it harder to find the canaries'
location and value. This is made optional because there is
a performance and size penalty when using it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-08-08 19:08:04 -04:00
Chaitanya Tata
5c7c099891 kernel: Add support to override banner
This is useful for Zephyr customers who want to display their own custom banners.

Signed-off-by: Chaitanya Tata <Chaitanya.Tata@nordicsemi.no>
2023-08-03 18:05:00 -04:00
Vadim Shakirov
73944c6157 kernel/sched: fix thread selection when ABORTING + PENDING
In commit d537267f, the check on thread abort was moved from next_up
to z_get_next_switch_handle. However, next_up is also called from
z_swap_next_thread, so the check on thread abort is now missing there.
This sometimes caused a thread to be stuck in the ABORTING + PENDING
state during test_smp_switch_torture in test/kernel/smp.

To avoid such cases in the future, it is worth leaving the check in
next_up.
Signed-off-by: Vadim Shakirov <vadim.shakirov@syntacore.com>
2023-08-01 11:59:42 +02:00
Peter Mitsis
bd5839ec9e kernel: Fix wrap-around check in kernel/mmu.h
Fixes the buffer wrap-around check so that it will not be ignored
by the GNU C compiler.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2023-08-01 09:51:33 +02:00
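
For context, a sketch of the general issue (not the actual kernel/mmu.h
code): a wrap-around check written as pointer arithmetic relies on undefined
behaviour, so GCC is free to delete it; doing the check in unsigned integer
arithmetic keeps it.

    #include <stddef.h>
    #include <stdint.h>

    /* `buf + size` overflowing a pointer is undefined behaviour, so the
     * compiler may assume it never happens and drop this check. */
    int validate_buggy(const char *buf, size_t size)
    {
        if (buf + size < buf) {
            return -1;
        }
        return 0;
    }

    /* Unsigned integer wrap-around is well defined, so this check survives
     * optimization. */
    int validate_safe(const char *buf, size_t size)
    {
        if (size > UINTPTR_MAX - (uintptr_t)buf) {
            return -1;
        }
        return 0;
    }
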
Nicolas Pitre
77b7eb1199 kernel/kheap: move to timepoint API
Remove sys_clock_timeout_end_calc() usage.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-07-25 09:12:26 +02:00
Nicolas Pitre
52e2f83185 kernel/timeout: introduce the timepoint API
This is meant as a substitute for sys_clock_timeout_end_calc().

Current sys_clock_timeout_end_calc() usage opens up many bug
possibilities due to the actual timeout evaluation's open-coded nature.

Issue #50611 is one example.

- Some users store the returned value in a signed variable, others in
  an unsigned one, making the comparison with UINT64_MAX (corresponding
  to K_FOREVER) wrong in the signed case.

- Some users compute the difference and store that in a signed variable
  to compare against 0 which still doesn't work with K_FOREVER. And when
  this difference is used as a timeout argument then the K_FOREVER
  nature of the timeout is lost.

- Some users complexify their code by special-casing K_NO_WAIT and
  K_FOREVER inline which is bad for both code readability and binary
  size.

Let's introduce a better abstraction to deal with absolute timepoints
with an opaque type to be used with a well-defined API.
The word "timeout" was avoided in the naming on purpose as the timeout
namespace is quite crowded already and it is preferable to make a
distinction between relative time periods (timeouts) and absolute time
values (timepoints).

A few stacks are also adjusted as they were too tight on X86.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-07-25 09:12:26 +02:00
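
As a rough usage sketch of the timepoint API described above (illustrative,
with a hypothetical `device_ready()` helper): compute the absolute end point
once, then test it, instead of open-coding end-time arithmetic or
special-casing K_NO_WAIT/K_FOREVER.

    #include <errno.h>
    #include <stdbool.h>
    #include <zephyr/kernel.h>

    extern bool device_ready(void);   /* hypothetical condition to poll */

    int wait_until_ready(k_timeout_t timeout)
    {
        /* Convert the relative timeout into an opaque absolute timepoint. */
        sys_timepoint_t end = sys_timepoint_calc(timeout);

        while (!device_ready()) {
            if (sys_timepoint_expired(end)) {
                return -ETIMEDOUT;
            }
            /* sys_timepoint_timeout(end) would yield the remaining time as
             * a k_timeout_t (K_FOREVER preserved) for a blocking call. */
            k_sleep(K_MSEC(1));
        }
        return 0;
    }
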
Christopher Friedt
d7119b889f kernel: dynamic: declare dynamic stubs when disabled
With some of the recent work to disable unnecessary system
calls, there is a scenario where `z_impl_k_thread_stack_free()`
is not defined and an undefined symbol error occurs.

Safety was very concerned that dynamic thread stack code might
touch other code that does not malloc, so add a separate file
for the stack alloc and free stubs.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-07-24 12:59:43 -04:00
Christopher Friedt
dcdebb616a kernel: dynamic: remove unnecessary size assignment
Previously, the kernel stack size was adjusted for no apparent
reason.

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-07-24 12:59:43 -04:00
Flavio Ceolin
ed8355ad3f kernel: userspace: Fix memory leak
Fix memory leak on dynamic object allocation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-20 16:10:32 +00:00
Flavio Ceolin
cbbe6d2ab7 kernel: userspace: Simplify dynamic objects
There is no need to have two types to represent dynamic objects.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-20 16:10:32 +00:00
Nicolas Pitre
13a6c45452 kernel: crude k_busy_wait() implementation
This allows for builds with CONFIG_SYS_CLOCK_EXISTS=n in which case
busy waits are achieved with a crude CPU loop. If ever accuracy is
needed even with such a configuration then implementing arch_busy_wait()
should be considered.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-07-19 21:42:41 -04:00
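
Schematically (not the actual implementation), a "crude CPU loop" busy wait
boils down to something like the following; the calibration constant is
purely hypothetical.

    #include <stdint.h>

    /* Hypothetical iterations-per-microsecond figure; a real port would
     * derive this from the CPU clock frequency. */
    #define LOOPS_PER_USEC 50U

    static void crude_busy_wait(uint32_t usec_to_wait)
    {
        for (uint32_t i = 0; i < usec_to_wait * LOOPS_PER_USEC; i++) {
            /* Empty asm keeps the compiler from removing the loop. */
            __asm__ volatile ("" ::: "memory");
        }
    }
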
Nicolas Pitre
b157031468 kernel: split k_busy_wait() out of timeout.c
This will allow for builds with CONFIG_SYS_CLOCK_EXISTS=n. For now this
is only the code move.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-07-19 21:42:41 -04:00
Flavio Ceolin
2b1106a407 kernel: userspace: Use only list for dynamic objs
Since the rbtree is being used as a list (we can no longer
assume that the object pointer is the address of the
data field in the dynamic object struct), let's just use
the already existing dlist for tracking dynamic kernel
objects.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-17 16:56:01 -04:00
Flavio Ceolin
4feb182f12 kernel: dynamic: Fix stack allocation logic
Fix the allocation preference logic. If the pool is preferred but POOL_SIZE
is 0 or pool allocation fails, fall back to heap allocation if it
is enabled.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-17 16:56:01 -04:00
Flavio Ceolin
3b7e0b672e kernel: userspace: Dynamic thread stack object
Add support for dynamic thread stack objects. A new container
for this kernel object was added to avoid imposing its alignment
constraint on all dynamic objects.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-17 16:56:01 -04:00
Flavio Ceolin
fc6d9eeb3e kernel: dynamic: Support usermode thread stack
Allocate a kernel object for the userspace thread stack.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-17 16:56:01 -04:00
Flavio Ceolin
67e66e4807 kernel: userspace: Add k_object_alloc_size
Add a new API to dynamically allocate kernel objects, allowing
an arbitrary size to be passed. This new API makes it possible
to allocate dynamic thread stacks.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-07-17 16:56:01 -04:00
Christopher Friedt
7b1b2576ac kernel: support dynamic thread stack allocation
Add support for dynamic thread stack allocation

Signed-off-by: Christopher Friedt <cfriedt@meta.com>
2023-07-13 17:16:32 -04:00
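
A hedged usage sketch of the dynamic stack allocation added by this series,
assuming the relevant Kconfig options are enabled; error handling is minimal
and sizes are illustrative.

    #include <errno.h>
    #include <zephyr/kernel.h>

    static struct k_thread worker_thread;

    static void worker(void *a, void *b, void *c)
    {
        /* ... */
    }

    int start_dynamic_worker(void)
    {
        const size_t stack_size = 2048;
        k_thread_stack_t *stack = k_thread_stack_alloc(stack_size, 0);

        if (stack == NULL) {
            return -ENOMEM;
        }

        k_thread_create(&worker_thread, stack, stack_size, worker,
                        NULL, NULL, NULL, K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
        k_thread_join(&worker_thread, K_FOREVER);

        /* Return the stack to the pool/heap once the thread is done. */
        return k_thread_stack_free(stack);
    }
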
Florian Grandel
e256b7d244 kernel: spinlock: LOCKED -> K_SPINLOCK
Let the kernel use the new K_SPINLOCK macro and remove the alias.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-07-10 09:27:21 +02:00
Florian Grandel
816d09c453 kernel: spinlock: expose LOCKED macro as public API
While the LOCKED pattern is universally useful, it can be misused. This
change therefore exposes the LOCKED pattern with extensive usage
documentation to reduce the risk of abuse or unintended deadlock.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-07-10 09:27:21 +02:00
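
A minimal usage sketch of the now-public macro (illustrative, not from the
commit): the lock is released when the block is exited, including via
K_SPINLOCK_BREAK.

    #include <zephyr/spinlock.h>

    static struct k_spinlock counter_lock;
    static int shared_counter;

    void bump_counter(void)
    {
        K_SPINLOCK(&counter_lock) {
            if (shared_counter >= 100) {
                K_SPINLOCK_BREAK;   /* early exit still releases the lock */
            }
            shared_counter++;
        }
    }
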
Jordan Yates
4ce1f03aa9 kernel: event modification functions return previous value
Update the return value of functions that modify the internal event
state from `void` to `uint32_t`, so that calling code can determine
whether the event was already in a given state, or if the call modified
it.

This simplifies the usage of `struct k_event` as an alternative to
`atomic_t` that users can block on.

Implements #57216

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2023-07-07 09:24:25 +02:00
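
A short illustrative sketch (not from the commit) of how the returned
previous value can be used; the event bit name is my own.

    #include <zephyr/kernel.h>

    #define EVT_CONNECTED BIT(0)

    K_EVENT_DEFINE(my_event);

    void mark_connected(void)
    {
        /* k_event_set() now returns the event value prior to the call, so
         * the caller can tell whether this transition changed anything. */
        uint32_t previous = k_event_set(&my_event, EVT_CONNECTED);

        if ((previous & EVT_CONNECTED) == 0) {
            /* first 0 -> 1 transition for this bit */
        }
    }
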
Florian Grandel
5fa5534afc doc: kernel: clocks: define "current time"
Scheduling relative timeouts from within timer callbacks (=sys clock ISR
context) differs from scheduling relative timeouts from an application
context.

This change documents and explains the rationale of this distinction.

Signed-off-by: Florian Grandel <fgrandel@code-for-humans.de>
2023-06-30 16:07:26 +02:00
Evgeniy Paltsev
16b8191be0 SMP: fix build failure if SMP=y and SYS_CLOCK_EXISTS=n
Fix build failure for the SMP configurations without sysclock.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2023-06-22 06:17:27 -04:00
Gerard Marull-Paretas
48b201cc53 device: make device dependencies optional
Device dependencies are not always required, so make them optional via
CONFIG_DEVICE_DEPS. When enabled, the gen_device_deps script will run so
that dependencies are collected and made part of the final image. Related
APIs will also be made available. Since device dependencies are used in
just a few places (power domains), disable the feature by default. When
not enabled, a second linking pass will not be required.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-06-21 09:32:05 +02:00
Gerard Marull-Paretas
1ebd76ed51 pm: add prompt to DEVICE_DEPS_DYNAMIC
The option can now be set by projects. This change will also allow
making it dependent on a future CONFIG_DEVICE_DEPS option.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-06-21 09:32:05 +02:00
Gerard Marull-Paretas
319fbe57e1 device: s/HAS_DYNAMIC_DEVICE_HANDLES/DEVICE_DEPS_DYNAMIC
Rename the Kconfig option to be in line with recent renamings in device
handles/dependencies.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-06-21 09:32:05 +02:00
Gerard Marull-Paretas
5982d83e2a device: s/struct device.handles/struct device.deps
Rename struct device `handles` member to `deps`, in line with previous
renamings in the device API.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-06-21 09:32:05 +02:00
Daniel Leung
b93ccd22c7 kernel: syscalls: use zephyr_syscall_header
This adds a few lines using zephyr_syscall_header() to include
headers containing syscall function prototypes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-06-17 07:57:45 -04:00
Flavio Ceolin
4f29930e4c pm: Fix cpus active count
Only set a cpu as active (in the pm subsystem) when the cpu is effectively
initialized. We cannot assume in the pm subsystem that all cpus were
initialized, since when the option CONFIG_SMP_BOOT_DELAY is used cpus are
initialized on demand by the application.

Note that once cpus are properly initialized the subsystem is able to track
their status.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-06-01 10:05:31 +02:00
Andy Ross
a08e23f68e kernel/sched: Fix SMP must-wait-for-switch conditions in abort/join
As discovered by Carlo Caione, the k_thread_join code had a case where
it detected it had been called on a thread already marked _THREAD_DEAD
and exited early.  That's not sufficient.  The thread state is mutated
from the thread itself on its exit path.  It may still be running!

Just like the code in z_swap(), we need to spin waiting on the other
CPU to write the switch handle before knowing it's safe to return,
otherwise the calling context might (and did) do something like
immediately k_thread_create() a new thread in the "dead" thread's
struct while it was still running on the other core.

There was also a similar case in k_thread_abort() which had the same
issue: it needs to spin waiting on the other CPU to kill the thread
via the same mechanism.

Fixes #58116

Originally-by: Carlo Caione <ccaione@baylibre.com>
Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Andy Ross
c3046f417a kernel/sched: Use new barrier and spin APIs
The switch_handle field in the thread struct is used as an atomic flag
between CPUs in SMP, and has been known for a long time to technically
require memory barriers for correct operation.  We have an API for
that now, so put them in:

* The code immediately before arch_switch() needs a write barrier to
  ensure that thread state written by the scheduler is seen to happen
  before the outgoing thread is flagged with a valid switch handle.

* The loop in z_sched_switch_spin() needs a read barrier at the end,
  to make sure the calling context doesn't load state from before the
  other CPU stored the switch handle.

Also, that same spot in switch_spin was spinning with interrupts held,
which means it needs a call to arch_spin_relax() to avoid an FPU state
deadlock on some architectures.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
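
The publish/consume ordering described above is the standard pattern shown
below, written here with generic C11 fences rather than the kernel's own
barrier API or scheduler code (purely illustrative).

    #include <stdatomic.h>
    #include <stdbool.h>

    static int payload;
    static atomic_bool ready;

    void publisher(int value)
    {
        payload = value;                               /* state written first */
        atomic_thread_fence(memory_order_release);     /* write barrier */
        atomic_store_explicit(&ready, true, memory_order_relaxed);
    }

    int consumer(void)
    {
        while (!atomic_load_explicit(&ready, memory_order_relaxed)) {
            /* spin; a real implementation would also relax the CPU here */
        }
        atomic_thread_fence(memory_order_acquire);     /* read barrier */
        return payload;                                /* now safe to read */
    }
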
Andy Ross
b89e427bd6 kernel/sched: Rename/redocument wait_for_switch() -> z_sched_switch_spin()
This trick turns out also to be needed by the abort/join code.
Promote it to a more formal-looking internal API and clean up the
documentation to (hopefully) clarify the exact behavior and better
explain the need.

This is one of the more... enchanted bits of the scheduler, and while
the trick is IMHO pretty clean, it remains a big SMP footgun.

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-26 17:09:35 -04:00
Hou Zhiqiang
41520e8b5b kernel: mmu: add direct-map support in z_phys_map()
Many RTOS applications assume that virtual and physical addresses
are mapped 1:1, so add 1:1 mapping support in z_phys_map() to make
it easy to adapt these applications.

Signed-off-by: Hou Zhiqiang <Zhiqiang.Hou@nxp.com>
2023-05-26 13:50:35 -04:00
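
A hedged sketch of what direct-map usage might look like; the address is
made up, and the flag name and prototype are assumptions based on the tree
around this change.

    #include <zephyr/kernel.h>
    #include <zephyr/sys/mem_manage.h>

    void map_peripheral_identity(void)
    {
        uint8_t *virt;

        /* Requesting a direct (1:1) mapping: the returned virtual address
         * equals the physical one, which is what many ported RTOS
         * applications expect. */
        z_phys_map(&virt, 0x40000000UL /* example MMIO base */, 0x1000,
                   K_MEM_PERM_RW | K_MEM_CACHE_NONE | K_MEM_DIRECT_MAP);
    }
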
Nicolas Pitre
e2d0d1a801 kernel: allow for arch specific processing within busy loops
Give architectures that need it the ability to perform special checks
while e.g. waiting for a spinlock to become available.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-05-25 08:25:11 +00:00
Carlo Caione
74a942e673 barriers: Introduce barrier operations
Introduce a new API for barrier operations starting with a general
skeleton and the implementation for barrier_data_memory_fence_full().

Select a built-in or an arch-based implementation according to new
Kconfig symbols CONFIG_BARRIER_OPERATIONS_BUILTIN and
CONFIG_BARRIER_OPERATIONS_ARCH.

The built-in implementation falls back on the compiler built-in
function using __ATOMIC_SEQ_CST as it is done for the atomic APIs
already.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2023-05-24 13:13:57 -04:00
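
As a hedged sketch of the built-in flavour described above (not the literal
Zephyr header; the function name follows the commit text), the fence reduces
to the compiler builtin with __ATOMIC_SEQ_CST.

    /* BARRIER_OPERATIONS_BUILTIN-style implementation: a full data memory
     * fence via the compiler builtin, mirroring what the atomic APIs do. */
    static inline void barrier_data_memory_fence_full(void)
    {
        __atomic_thread_fence(__ATOMIC_SEQ_CST);
    }
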
Flavio Ceolin
df4be07b26 kernel: mmu: Fix Xtensa memory alignment issue
z_page_frame can't be packed on Xtensa due to memory alignment
constraints. When this struct is packed, it is 5 bytes long, which
causes a memory alignment problem on Xtensa.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-05-23 08:54:29 +02:00
Nicolas Pitre
c1afa5c85e kernel/atomic_c.c: prevent usage in SMP configs
The C version of atomic_cas() uses k_smp_lock() ... which uses
atomic_cas().

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2023-05-23 08:53:39 +02:00
Gerard Marull-Paretas
dacb3dbfeb iterable_sections: move to specific header
Until now, iterable sections APIs have been part of the toolchain
(common) headers. They are not strictly related to a toolchain; they
just rely on the linker providing support for sections. Most files relied
on indirect includes to access the API; now it is included as needed.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-05-22 10:42:30 +02:00
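
For illustration (not from the commit), users of the API now include the
dedicated header explicitly; a minimal sketch, assuming the `my_handler`
section is also registered in the linker script.

    #include <stddef.h>
    #include <zephyr/sys/iterable_sections.h>

    struct my_handler {
        const char *name;
        void (*fn)(void);
    };

    /* Each instance lands in a dedicated linker section... */
    STRUCT_SECTION_ITERABLE(my_handler, handler_a) = { .name = "a", .fn = NULL };
    STRUCT_SECTION_ITERABLE(my_handler, handler_b) = { .name = "b", .fn = NULL };

    void run_handlers(void)
    {
        /* ...and can be walked without maintaining an explicit list. */
        STRUCT_SECTION_FOREACH(my_handler, h) {
            if (h->fn != NULL) {
                h->fn();
            }
        }
    }
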
Andy Ross
d537267fc3 kernel/sched: Fix thread selection misordering with aborted threads
When a running thread gets aborted asynchronously (this only happens
in SMP contexts, obviously) it gets flagged "aborting", but the actual
abort needs to happen in the thread's own context.  For convenience,
this was done in the next_up() routine that selects the next thread to
run at interrupt exit time.

But this check was being done AFTER the next candidate thread was
selected from the run queue.  Thread abort can wake up threads blocked
in k_thread_join(), and therefore these weren't seen as runnable
threads, even if they should have been.

Executive summary: if you killed a thread running on another CPU, and
there was another thread joined to the killed thread that should have
run on that CPU, it wouldn't (until it received an interrupt or
otherwise reached a schedule point).

Move the abort check above the run queue inspection and into the
end-of-interrupt processing in z_get_next_switch_handle() (so it's
actually a mild performance boost as it's no longer part of the
cooperative context switch path).  Simple fix, subtle bug.

Fixes #58040

Signed-off-by: Andy Ross <andyross@google.com>
2023-05-22 08:06:49 +00:00
Qipeng Zha
fa973d1b7b kernel: pin _kernel variable in case of paging
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which will lead to a failure when paging is enabled,
so make this critical variable pinned.

Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
2023-05-16 11:41:11 -04:00
Jaroslaw Stelter
9c0dd7e3be intel_adsp: ace20_lnl: Change LNL core count to 5
The ACE 2.0 LNL platform has 5 HIFI4 cores. Change the number
of cores to enable the 5th core on the platform.

Signed-off-by: Jaroslaw Stelter <Jaroslaw.Stelter@intel.com>
2023-05-15 08:00:11 -04:00
Armin Brauns
0bc342fbbd kernel: fix buffer overflow from incorrect K_MSGQ_DEFINE definition
Without these parentheses, specifying a q_max_msgs of e.g.
`MY_DEFAULT_QUEUESIZE+1` would result in a buffer of size
(1 element + MY_DEFAULT_QUEUESIZE bytes).

This would then lead to an unbounded buffer overflow because the queue
never reaches the exact (offset by MY_DEFAULT_QUEUESIZE bytes)
`buffer_end` and just keeps writing.

Additionally, add asserts to make sure this can't happen again.

Signed-off-by: Armin Brauns <armin.brauns@embedded-solutions.at>
2023-05-12 13:39:10 -04:00
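
To make the arithmetic concrete, a simplified sketch of the failure mode
(the real K_MSGQ_DEFINE is more involved; the macro bodies below only mirror
its buffer sizing).

    /* Unparenthesized parameter, as in the buggy definition. */
    #define MSGQ_BUF_BAD(name, q_msg_size, q_max_msgs) \
        static char name##_buf[q_max_msgs * q_msg_size]

    /* Fixed: parameters parenthesized. */
    #define MSGQ_BUF_GOOD(name, q_msg_size, q_max_msgs) \
        static char name##_buf[(q_max_msgs) * (q_msg_size)]

    /* With MY_DEFAULT_QUEUESIZE == 16 and 8-byte messages:
     *
     *   MSGQ_BUF_BAD(q, 8, MY_DEFAULT_QUEUESIZE + 1)
     *     -> char q_buf[MY_DEFAULT_QUEUESIZE + 1 * 8]    == 24 bytes
     *        (1 element + MY_DEFAULT_QUEUESIZE bytes, as described above)
     *
     *   MSGQ_BUF_GOOD(q, 8, MY_DEFAULT_QUEUESIZE + 1)
     *     -> char q_buf[(MY_DEFAULT_QUEUESIZE + 1) * 8]  == 136 bytes
     *        (room for all 17 messages)
     */
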
Gerard Marull-Paretas
d0e58ad0a6 device: use iterable sections
Use iterable sections to handle the devices list. This simplifies the
devices implementation by using standard APIs.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2023-05-12 12:01:10 +02:00