Commit graph

3183 commits

Anas Nashif
7b2ccf4dfe kernel: increase main stack size for ztests on nios2
ztest now needs more main stack space.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-06-12 12:47:18 -04:00
Fabio Baltieri
328365989f kernel: mem_slab: only define slab_ptr_is_good with assert enabled
Add a __ASSERT_ON guard around slab_ptr_is_good, as it is only used in
assertions, and leaving it in seems to generate a build warning with
some clang versions:

kernel/mem_slab.c:207:20: error: unused function 'slab_ptr_is_good'
  207 | static inline bool slab_ptr_is_good(struct k_mem_slab *slab,...
      |                    ^~~~~~~~~~~~~~~~
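
A minimal sketch of the guard described here, assuming the conventional
Zephyr __ASSERT_ON pattern (the validation body itself is unchanged and
elided):

	#if __ASSERT_ON
	static inline bool slab_ptr_is_good(struct k_mem_slab *slab,
					    const void *ptr)
	{
		/* ... existing pointer validation, only referenced
		 * from __ASSERT() expressions ...
		 */
	}
	#endif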

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2024-06-10 17:46:10 +01:00
Krzysztof Sychla
87946a8996 kernel: banner: fix disabling boot banner
When the CONFIG_BOOT_BANNER flag is set to "n" but CONFIG_BOOT_DELAY
is enabled, a delay message is still printed at boot time.
This change allows the whole boot banner to be disabled.

Signed-off-by: Krzysztof Sychla <ksychla@antmicro.com>
2024-06-10 00:59:10 -07:00
Nicolas Pitre
5f2620fece kernel: mem_slab: extend slab pointer validation
Abstract slab pointer validation and apply it to block dequeue during
allocation in addition to the existing block freeing. This should help
catch some buffer-overflow-induced corruptions.
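
As an illustration, the dequeue-side check might look like this (a
hedged sketch; the exact call site is an assumption):

	__ASSERT(slab_ptr_is_good(slab, slab->free_list),
		 "slab corruption detected: bad free list pointer");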

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-07 21:43:28 +02:00
Nicolas Pitre
67706a1802 kernel: mem_slab: reverse free list initialization
As it is, blocks are allocated going backward within the buffer.
There is nothing fundamentally wrong with that, but it makes debugging
unnatural with the successively descending addresses. Create the free
list so pointers are oriented forward, at least initially.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-07 21:43:28 +02:00
frei tycho
4c2938a295 kernel: added missing parentheses
- added missing parentheses around macro argument expansion
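
A generic illustration with a hypothetical macro (not the one touched
by this commit):

	/* without parentheses the expansion mis-binds: */
	#define DOUBLE_BAD(x)  (x * 2)    /* DOUBLE_BAD(a + b) -> (a + b * 2) */
	/* with parentheses the argument stays grouped: */
	#define DOUBLE_GOOD(x) ((x) * 2)  /* DOUBLE_GOOD(a + b) -> ((a + b) * 2) */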

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-06-07 12:59:46 +02:00
Peter Mitsis
0bcdae2c62 kernel: Add CONFIG_ARCH_HAS_DIRECTED_IPIS
Platforms that support IPIs allow them to be broadcast via the
new arch_sched_broadcast_ipi() routine (replacing arch_sched_ipi()).
Those that also allow IPIs to be directed to specific CPUs may
use arch_sched_directed_ipi() to do so.

As the kernel has the capability to track which CPUs may need an IPI
(see CONFIG_IPI_OPTIMIZE), this commit updates the signalling of
tracked IPIs to use the directed version if supported; otherwise
they continue to use the broadcast version.

Platforms that allow directed IPIs may see a significant reduction
in the number of IPI related ISRs when CONFIG_IPI_OPTIMIZE is
enabled and the number of CPUs increases.  These platforms can be
identified by the Kconfig option CONFIG_ARCH_HAS_DIRECTED_IPIS.
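
A hedged sketch of how the kernel might choose between the two flavors;
the helper name signal_pending_ipi() and the pending_ipi handling are
assumptions, while the arch routines and Kconfig symbols come from this
commit:

	static void signal_pending_ipi(void)
	{
	#if defined(CONFIG_ARCH_HAS_DIRECTED_IPIS) && defined(CONFIG_IPI_OPTIMIZE)
		uint32_t cpu_bitmap = (uint32_t)atomic_clear(&_kernel.pending_ipi);

		if (cpu_bitmap != 0) {
			arch_sched_directed_ipi(cpu_bitmap);
		}
	#else
		arch_sched_broadcast_ipi();
	#endif
	}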

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
Peter Mitsis
d8a4c8a90c kernel: Add CONFIG_IPI_OPTIMIZE
The CONFIG_IPI_OPTIMIZE configuration option allows for the flagging
and subsequent signaling of IPIs to be optimized.

It does this by making each bit in the kernel's pending_ipi field
a flag that indicates whether the corresponding CPU might need an IPI
to trigger the scheduling of a new thread on that CPU.

When a new thread is made ready, we compare that thread against each
of the threads currently executing on the other CPUs. If there is a
chance that the new thread should preempt the thread on the other CPU,
then we flag that an IPI is needed for that CPU. That is, a clear bit
indicates that the CPU absolutely will not need to reschedule, while a
set bit indicates that the target CPU must make that determination for
itself.
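
A sketch of that comparison loop (thread_is_preemptible() is a
hypothetical helper; recall that lower prio values mean higher priority
in Zephyr):

	atomic_val_t ipi_mask = 0;

	for (unsigned int i = 0; i < arch_num_cpus(); i++) {
		struct k_thread *cur = _kernel.cpus[i].current;

		/* flag CPU i if the new thread might preempt it */
		if ((cur != NULL) && thread_is_preemptible(cur) &&
		    (new_thread->base.prio < cur->base.prio)) {
			ipi_mask |= BIT(i);
		}
	}
	(void)atomic_or(&_kernel.pending_ipi, ipi_mask);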

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
Peter Mitsis
9ff5221d23 kernel: Update IPI usage in k_thread_priority_set()
1. The flagging of IPIs is moved out of k_thread_priority_set() into
z_thread_prio_set(). This allows an IPI to be raised for a thread
that had its priority bumped due to the handling of priority
inheritance from a mutex.

2. k_thread_priority_set()'s check for sched_locked only applies to
non-SMP builds that are using the old arch_swap() framework to switch
between threads.

Incidentally, nearly all calls to flag_ipi() are now performed with
sched_spinlock being locked. The only exception is in slice_timeout().

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 22:35:54 -04:00
Peter Mitsis
ed7a5f31c2 kernel: Update CONFIG_PIPES Kconfig description
Updates the CONFIG_PIPES Kconfig description to add a note that
enabling it will cause a slight increase in the size of the thread
structure.
This mirrors a similar comment in CONFIG_EVENTS.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-06-04 19:10:56 -04:00
Yong Cong Sin
6a3cb93d88 arch: remove the use of z_arch_esf_t completely from internal
Created `GEN_OFFSET_STRUCT` & `GEN_NAMED_OFFSET_STRUCT`, which
work for `struct`s, and removed the use of `z_arch_esf_t`
completely.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Yong Cong Sin
e54b27b967 arch: define struct arch_esf and deprecate z_arch_esf_t
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.

After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-04 14:02:51 -05:00
Flavio Ceolin
65fc5b7f17 device: Remove z_device_is_ready
This duplicates the functionality of device_is_ready.

Calls to z_device_is_ready are made in kernel mode, so it is
safe to call its implementation directly.
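
For reference, kernel-mode callers simply use the public API; the
device binding here is an arbitrary example:

	const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(uart0));

	if (!device_is_ready(dev)) {
		return -ENODEV;
	}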

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-31 08:06:44 +02:00
Yong Cong Sin
3570408db5 build: namespace syscall sources to zephyr/
Namespace the `syscall_dispatch.c` & `syscall_export_llext.c`
to `zephyr/` as well.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
Yong Cong Sin
bbe5e1e6eb build: namespace the generated headers with zephyr/
Namespaced the generated headers with `zephyr` to prevent
potential conflicts with other headers.

Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows developers to
continue using the old include paths for the time being,
until the option is deprecated and eventually removed. The Kconfig
generates a build-time warning message, similar to
`CONFIG_TIMER_RANDOM_GENERATOR`.

Updated the include paths of in-tree sources accordingly.

Most of the changes here are scripted, check the PR for more
info.
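
The resulting change in application code looks like this (a sketch
using the generated syscall header as an example):

	/* old path, still accepted while LEGACY_GENERATED_INCLUDE_PATH=y */
	#include <syscalls/kernel.h>

	/* new, namespaced path */
	#include <zephyr/syscalls/kernel.h>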

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-05-28 22:03:55 +02:00
Fin Maaß
8c37f14b98 tracing: add k_realloc trace
Add tracing support for `k_realloc`.

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2024-05-28 17:55:12 +02:00
Fin Maaß
09eaa8757f kernel: implement k_realloc
Implement k_realloc.
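
A usage sketch, assuming k_realloc() follows the usual realloc()
contract (the block may move; the original block stays valid on
failure):

	void *buf = k_malloc(64);
	void *bigger = k_realloc(buf, 128);

	if (bigger == NULL) {
		k_free(buf);	/* grow failed; original buffer still valid */
	} else {
		buf = bigger;
	}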

Signed-off-by: Fin Maaß <f.maass@vogl-electronic.com>
2024-05-28 17:55:12 +02:00
Hess Nathan
20b55425d3 coding guidelines: comply with MISRA Rule 13.4
avoid the direct use of assignment expression
values for conditions
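
An illustrative before/after (hypothetical code, not from this commit):

	/* non-compliant: the assignment's value is used as the condition */
	if ((ret = do_work()) != 0) {
		return ret;
	}

	/* compliant: assignment and test are separate statements */
	ret = do_work();
	if (ret != 0) {
		return ret;
	}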

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-28 10:07:31 +02:00
Flavio Ceolin
4d85f3d91c pm: Deprecate z_pm_save_idle_exit
Deprecate z_pm_save_idle_exit and promote pm_system_resume.
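
A sketch of what such a deprecation shim typically looks like in Zephyr
(illustrative; the actual definition may differ):

	__deprecated static inline void z_pm_save_idle_exit(void)
	{
		pm_system_resume();
	}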

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Flavio Ceolin
f7437ac3b1 pm: Move z_pm_save_idle_exit to pm subsys
There is no need for this function to be defined inside the kernel,
since all places using it protect the call under ifdef PM guards.

This way we can also remove the ifdef condition inside the implementation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-27 02:10:03 -07:00
Nicolas Pitre
b34e94f362 kernel: demand_paging: fix arch_page_location_get() documentation
Symbols from enum arch_page_location are defined as
ARCH_PAGE_LOCATION_* and not ARCH_PAGE_FAULT_*.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-24 07:47:49 -04:00
Peter Mitsis
d082cd29af kernel: Relax loop in z_smp_global_lock()
Updates z_smp_global_lock() to follow the pattern used in spinlocks
to relax the loop between atomic_cas() attempts.
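
A sketch of the relaxed loop (variable names are assumptions):

	while (!atomic_cas(&global_lock, 0, 1)) {
		arch_spin_relax();	/* per-arch pause/backoff hint */
	}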

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-05-22 21:35:06 -04:00
Flavio Ceolin
c12f0507b6 userspace: dynamic: Fix k_thread_stack_free verification
The k_thread_stack_free syscall was not checking whether the caller
had permission to the given stack object.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-21 20:54:56 -04:00
Daniel Leung
e6abc035c8 kernel: mem_domain: new config for isolated stacks
This adds a new kconfig to indicate if architecture code
supports isolating thread stacks within the same domain,
and another new kconfig to selectively enable this
behavior.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Daniel Leung
169bc07e83 kernel: move memory domain kconfigs into its own file
This moves the memory domain related kconfigs into their own file,
Kconfig.mem_domain.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-21 20:53:09 -04:00
Andy Ross
17a5beb341 kernel: Predicate _cpus_active on CONFIG_PM
This value isn't used outside of the PM subsystem, so don't build it.

More important than the four bytes of .bss was the use of an
atomic_inc().  Some platforms are forced to use
CONFIG_ATOMIC_OPERATIONS_C (but in almost all cases are single-core
devices that won't use atomics at runtime).  There, this turns into a
function call that pulls in the whole atomics implementation.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-21 15:42:50 -07:00
Daniel Leung
2ad265cb75 kernel: userspace: manipulate _thread_idx_map on per-byte basis
The sys_bitfield_(clear/set)_bit() functions operate on pointer-sized
elements. However, _thread_idx_map[] is a byte array. On little endian
systems, the bitops work fine. However, on big endian
systems, changing the lower bits may actually be manipulating
memory outside the array when CONFIG_MAX_THREAD_BYTES is not a
multiple of 4. So modify the code to perform bit ops on
a per-byte basis.
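
A sketch of the per-byte variant (tidx is the thread index; BIT() is
the standard Zephyr helper):

	/* set */
	_thread_idx_map[tidx / 8U] |= BIT(tidx % 8U);
	/* clear */
	_thread_idx_map[tidx / 8U] &= ~BIT(tidx % 8U);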

Fixes #72430

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-05-18 15:53:27 +03:00
Nicolas Pitre
e9a47d932c kernel: mmu: shrink and align struct z_page_frame
The struct z_page_frame is marked __packed to avoid extra padding as
such padding may represent significant memory waste when lots of page
frames are used. However this is a bad strategy.

The code contained this somewhat dubious comment and code in
free_page_frame_list_put():

	/* The structure is packed, which ensures that this is true */
	void *node = pf;
	sys_slist_append(&free_page_frame_list, node);

This is bad for many reasons:

- type checking is completely bypassed;

- if the sys_snode_t node member is no longer located at the front of
  struct z_page_frame then the code will still compile and possibly run
  but be broken with memory corruption as a likely outcome;

- the sys_slist_append() code is completely unaware of the packed
  attribute which breaks architectures with alignment restrictions.

Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.
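
A sketch of the idea: with a page-aligned virtual address, the low bits
are free to hold flags (field and macro names here are assumptions, not
the actual code):

	struct z_page_frame {
		/* page-aligned va in the high bits, flags in the low bits */
		uintptr_t va_and_flags;
	};

	#define PF_FLAGS_MASK	((uintptr_t)CONFIG_MMU_PAGE_SIZE - 1)

	static inline void *pf_to_virt(const struct z_page_frame *pf)
	{
		return (void *)(pf->va_and_flags & ~PF_FLAGS_MASK);
	}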

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Nicolas Pitre
57305971d1 kernel: mmu: abstract access to page frame flags and address
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.
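
Call sites then touch flags only through the accessors, e.g. (the flag
name is assumed for illustration):

	z_page_frame_set(pf, Z_PAGE_FRAME_PINNED);
	void *va = z_page_frame_to_virt(pf);
	z_page_frame_clear(pf, Z_PAGE_FRAME_PINNED);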

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-05-13 16:04:40 -04:00
Daniel Apperloo
9fc26804fb linker: decouple KERNEL_WHOLE_ARCHIVE from LLEXT
Dynamic code execution applications not using LLEXT for "extension"
loading are subject to the same linker optimization symbol resolution
issue described in commit 321e395 (in summary, libkernel.a syscalls
not used directly by the application result in weak symbol resolution
of their z_mrsh_ wrapper).

To support use cases where an application uses alternative methods
to load and execute code calling syscalls (likely from userspace), or
uses a mechanism of which the linker may not be aware, the
configuration option has been decoupled from CONFIG_LLEXT (which is
now a selector of it) into KERNEL_WHOLE_ARCHIVE.

Signed-off-by: Daniel Apperloo <daniel.apperloo@intel.com>
2024-05-13 14:23:38 +02:00
Hess Nathan
6d417d52c2 coding guidelines: comply with MISRA Rule 12.1
added parentheses to remove ambiguities

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-12 13:37:27 -04:00
Hess Nathan
e05c4a8786 coding guidelines: comply with MISRA Rule 11.8
- modified parameter types to receive a const pointer when a
  non-const pointer is not needed

- avoided redundant casts
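
Illustrative signature change (hypothetical function):

	/* before: buf is never written through */
	static size_t count_nonzero(uint8_t *buf, size_t len);

	/* after: constness documents and enforces that */
	static size_t count_nonzero(const uint8_t *buf, size_t len);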

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-10 14:45:14 -05:00
Flavio Ceolin
68ea73aca2 kernel: sem: Remove constant expression
limit is an unsigned int and K_SEM_MAX_LIMIT is defined as UINT_MAX,
which means that limit can never be greater than K_SEM_MAX_LIMIT.
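
The shape of the removed check (illustrative):

	/* limit is an unsigned int and K_SEM_MAX_LIMIT is UINT_MAX,
	 * so this comparison is always false and can be dropped:
	 */
	if (limit > K_SEM_MAX_LIMIT) {
		return -EINVAL;
	}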

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-05-09 12:39:46 -04:00
Pieter De Gendt
f147a5fec2 spelling: Replace occurrences of "iff" with "if and only if"
Spell checking tools do not recognize "iff", replace with "if and only if".
See https://en.wikipedia.org/wiki/If_and_only_if

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-05-06 14:58:08 +01:00
frei tycho
fe38c703b2 kernel: coding guidelines: cast unused arguments to void
- added missing ARG_UNUSED
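
Typical pattern (hypothetical callback):

	static void timeout_handler(struct k_timer *timer)
	{
		ARG_UNUSED(timer);	/* explicitly mark the unused argument */
	}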

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-06 14:56:24 +01:00
Alberto Escolar Piedras
2f5e93938b Revert "kernel: retrieve system timer clock frequency at runtime or static"
This reverts commit 7c03e5de7f.

https://github.com/zephyrproject-rtos/zephyr/pull/69705
Introduced a regression in main in which
tests/subsys/logging/log_timestamp
started failing. (See
https://github.com/zephyrproject-rtos/zephyr/issues/72344
for more info).
Let's revert the PR. It can be resubmitted once the issue is
fixed.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-05-06 14:52:29 +03:00
Najumon B.A
7c03e5de7f kernel: retrieve system timer clock frequency at runtime or static
Update the kernel timeout logic to retrieve the system timer clock
frequency either at runtime or statically, depending on the Kconfig
option TIMER_READS_ITS_FREQUENCY_AT_RUNTIME.

Signed-off-by: Najumon B.A <najumon.ba@intel.com>
2024-05-04 13:24:12 +03:00
Andy Ross
dec022a848 kernel/sched: Fix edge^2 case in abort/join
The previous abort-lifecycle fix missed a case: other threads can
enter k_thread_join(), see that the thread is already dead, and then
need to call z_thread_switch_spin() to wait for a context switch.  But
the new "dummification" code was (by design!) terminating the thread
such that no context would be saved to it.  So switch_handle stayed
NULL and if you hit that timing case correctly[1] you'd deadlock
waiting for a switch that would never come.

Fix is just to set switch_handle when dummifying to any non-NULL
value.
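
In code terms the fix amounts to something like this (a sketch; any
non-NULL value works):

	/* let z_thread_switch_spin() waiters see a completed switch */
	thread->switch_handle = thread;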

Also add an assertion to catch the obvious case that a thread is
actually dead on the exit path of k_thread_abort() to make sure the
variant path continues to set flags correctly.

[1] CI was doing it fairly reliably via tests/kernel/smp_abort on
    qemu_cortex_a53 only.  Only one of my dev systems could see it,
    and then only about 15% of the time.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
47ab66311d kernel/sched: Fix lockless ordering in halt_thread()
We've had threads spinning on the thread state bits, but weren't being
careful to ensure that those bits were the last things seen to change
in a halting thread.  Move it to the end, and add a barrier for
correctness.
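
A hedged sketch of the ordering, using the zephyr/sys/barrier.h API;
the point is that the state-bit write is the last visible store:

	/* ... all other teardown writes to the halting thread ... */
	barrier_dmem_fence_full();
	thread->base.thread_state |= _THREAD_DEAD;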

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
fd340ebf31 sched: Optimize dummy thread usage on SMP
Nicolas Pitre points out that since these thread structs are just
dummies for context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU, everyone can
share the same one.

The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch().  Leave this in a separate commit for bisection
purposes, but the risk seems very low.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
f0fd54cb31 kernel/sched: Fix free-memory write when ISRs abort _current
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).

But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening.  The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct!  The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.

Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this.  There is some non-trivial
memory cost to that; thread structs on many architectures are large.

Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.

Fixes #64646

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
Andy Ross
fc56050e05 kernel/spinlock: Fix SPIN_VALIDATE in ISRs
Spinlocks taken in ISRs were storing the _current thread pointer of
the interrupted thread as the owner, which was never strictly correct
but was benign as the thread would never run until the lock was
released.

But now k_thread_abort(_current) in an ISR has been fixed to eliminate
all references to the (now aborted) thread struct, and _current points
to a dummy thread.  Handle that edge case in the validation framework.

Signed-off-by: Andy Ross <andyross@google.com>
2024-05-02 13:55:03 -04:00
frei tycho
14cb7d5b03 kernel: coding guidelines: add explicit cast to void
- added explicit cast to void when returned value is expectedly ignored
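
For example (my_sem is a hypothetical semaphore):

	(void)k_sem_take(&my_sem, K_NO_WAIT);	/* result deliberately ignored */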

Signed-off-by: frei tycho <tfrei@baumer.com>
2024-05-02 16:49:36 +02:00
Hess Nathan
7659cfd4dc coding guidelines: comply with MISRA Rule 2.2
- avoided dead stores

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-02 09:32:46 +01:00
Adrian Bonislawski
e44d2e65ee kernel: timeslicing: add time slice reset in slice per thread api
This will reset time slice in k_thread_time_slice_set()
when slice per thread api is used.

Currently it is reset only in the standard slice_set path.
Signed-off-by: Adrian Bonislawski <adrian.bonislawski@intel.com>
2024-05-01 22:55:50 +01:00
Hess Nathan
527e712448 coding guidelines: comply with MISRA Rule 20.9
- avoid using undefined macros in #if expressions
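
Illustration (FOO is a hypothetical macro):

	/* non-compliant when FOO may be undefined: */
	#if FOO
	/* ... */
	#endif

	/* compliant: test for definition explicitly */
	#if defined(FOO)
	/* ... */
	#endif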

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 19:48:19 +01:00
Hess Nathan
32af724fbb coding guidelines: comply with MISRA C:2012 Rule 11.2
avoid converting pointers to incomplete types by using the pointer to
the first item

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:53:20 -04:00
Hess Nathan
c30a9c4c97 coding guidelines: comply with MISRA Rule 21.15
- made the copied data type explicit

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-05-01 10:52:43 -04:00
Peter Mitsis
a3c7152f92 kernel: Update thread cpu in z_get_next_switch_handle()
Updates z_get_next_switch_handle() to set the new thread's base.cpu
value as is done in do_swap(). This helps ensure that the record of
the last CPU on which the thread executed stays accurate.
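
Sketch of the added assignment (mirroring do_swap(); the exact
expression is an assumption):

	new_thread->base.cpu = arch_curr_cpu()->id;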

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2024-04-29 17:40:28 +01:00
Eric Johnson
69c5c6d511 kernel: Remove duplicate execution_cycles write and improve docstring
There is a duplicate write in `z_sched_thread_usage()` that can be
removed. Also modified the docstrings of `k_thread_runtime_stats` to
better describe the differences between execution_cycles and
total_cycles when getting stats for the CPU or a thread.

Signed-off-by: Eric Johnson <eric@memfault.com>
2024-04-28 13:04:20 -04:00