Add instrumentation hooks for transitions and actions, as well as
error states. This will allow us to track the state of the state
machine during testing, and also provide a way to debug issues
that may arise during execution.
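A callback-table design is one plausible shape for such hooks. The sketch below is purely hypothetical: the names (`smf_hooks`, `on_transition`, `do_transition`) and the idea of optional per-event callbacks are illustrative, not the actual API added by this commit.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical instrumentation hooks; names and signatures are
 * illustrative, not the actual API. */
struct smf_hooks {
	void (*on_transition)(int from_state, int to_state);
	void (*on_action)(int state);
	void (*on_error)(int state, int err);
};

static int transition_count;

static void count_transition(int from, int to)
{
	(void)from;
	(void)to;
	transition_count++;
}

/* The state machine core invokes the hook, when set, at each
 * transition; an unset hook costs only a NULL check. */
static void do_transition(const struct smf_hooks *hooks, int from, int to)
{
	if (hooks != NULL && hooks->on_transition != NULL) {
		hooks->on_transition(from, to);
	}
	/* ... the actual state change would happen here ... */
}
```

During testing, a hook that simply counts or records transitions makes the machine's behavior observable without touching its logic.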
Signed-off-by: Glenn Andrews <andrewsglenn@meta.com>
Replace ternary operator with if-else to avoid mixing signed and unsigned
types in the conditional expression. This eliminates the compiler warning
while preserving the original logic.
Fixes #104581
Signed-off-by: Roman Bakshansky <bakshansky@protonmail.com>
Add missing memory barriers after branching on k_is_user_context() to
prevent possible reordering of privileged memory accesses.
Signed-off-by: Adrian Warecki <adrian.warecki@intel.com>
The EBUSY return condition was incorrect: sem_destroy(), for example,
would in some cases report this error even though no thread was blocked
on the semaphore.
This commit fixes that by checking the wait queue instead of the
semaphore count.
Signed-off-by: Jakub Michalski <jmichalski@antmicro.com>
The UUID library has been present as an experimental library in
the Zephyr code base since v4.2.
Since no need for major API changes has emerged in the last two
Zephyr versions, the library can be safely promoted to unstable.
Signed-off-by: Simone Orru <simone.orru@secomind.com>
At FULL hardening, trailer canaries on used chunks implicitly guard
adjacent free chunk headers: a sequential buffer overflow must corrupt
the used chunk's trailer before reaching the next header. However,
when a free neighbor's metadata is needed for merging during free(),
the neighbor's header could already have been corrupted by an overflow
from its left used chunk that hasn't been freed yet. For example:
[hdr_U1] [data_U1] [trailer_U1] [hdr_F] [...] [hdr_U2] [trailer_U2]
If data_U1 overflows past trailer_U1 and corrupts hdr_F, freeing U2
would use hdr_F's corrupted size and free-list pointers for merging,
potentially leading to heap structure corruption or arbitrary writes.
Additionally, a corrupted LEFT_SIZE in a used chunk being freed can
point to a fake header crafted inside the left neighbor's data area:
[hdr_U1] [data_U1 ..fake_hdr.. trailer_U1'] [hdr_U2'] [data_U2] ...
| corrupted LEFT_SIZE points
| by overflow to fake_hdr
+<--------------------------+
A determined attacker can make fake_hdr's size field self-consistent
with the corrupted LEFT_SIZE so that the structural round-trip checks
pass. If fake_hdr is marked "used", free_chunk() skips the left
merge and the corruption goes undetected. If marked "free", it
triggers a bogus merge with attacker-controlled free-list pointers.
Both cases are caught by verifying the left used neighbor's canary:
any overflow from the left must pass through trailer_U1 to reach
hdr_U2's LEFT_SIZE field, so the corrupted canary acts as a tripwire
regardless of whether the resulting fake header is marked used or free.
Address this by introducing free_chunk_check() which validates a free
chunk's structural integrity before trusting its header fields. It
consolidates the existing MODERATE-level structural checks (chunk
linkage, used-bit consistency) with a new FULL-level canary
verification of the left used neighbor. Factoring these checks into
a dedicated function lets callers specify @left_trusted according to
context (e.g. the chunk being freed is the left neighbor of the right
merge candidate, so its canary need not be rechecked).
In inplace_realloc()'s shrink path, the freed suffix's left neighbor
is always the chunk being reallocated (used, just validated). This
path inlines only the right merge to avoid the unnecessary left canary
check.
The check is called before every free list removal: in free_chunk()
for left/right merge candidates, in alloc_chunk() when pulling from
a bucket, and in inplace_realloc() before consuming the right
neighbor.
Also give chunk0 a canary trailer so that the left-neighbor canary
check works uniformly for the first free chunk in the heap.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Replace the standalone SYS_HEAP_CANARIES bool with a tiered
SYS_HEAP_HARDENING choice controlling runtime validation:
NONE (0) - no checks
BASIC (1) - double-free and overflow detection in free/realloc
MODERATE (2) - free list and neighbor consistency checks
FULL (3) - trailer canary on every allocation
EXTREME (4) - exhaustive heap validation on every operation
Default is MODERATE when ASSERT is enabled, BASIC otherwise.
Hardening checks are independent of CONFIG_ASSERT: they use
LOG_ERR + k_panic() instead of __ASSERT so the configured level
is always honored regardless of assertion settings.
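A level-gated check can be expressed so that disabled levels fold away at compile time while enabled ones run regardless of assertion settings. The macro below is an illustrative sketch only: the level constants mirror the tiers listed above, but `HEAP_CHECK` and `CONFIG_SYS_HEAP_HARDENING_LEVEL` are assumed names, not the literal Zephyr code.

```c
#include <assert.h>

#define HEAP_HARDENING_NONE     0
#define HEAP_HARDENING_BASIC    1
#define HEAP_HARDENING_MODERATE 2
#define HEAP_HARDENING_FULL     3
#define HEAP_HARDENING_EXTREME  4

/* Assumed Kconfig-derived level; MODERATE is the ASSERT-enabled default. */
#ifndef CONFIG_SYS_HEAP_HARDENING_LEVEL
#define CONFIG_SYS_HEAP_HARDENING_LEVEL HEAP_HARDENING_MODERATE
#endif

/* Evaluates to the truth of cond when the configured level is at least
 * 'level'; otherwise the constant condition lets the compiler fold the
 * whole check to "passed", independent of CONFIG_ASSERT. */
#define HEAP_CHECK(level, cond) \
	((CONFIG_SYS_HEAP_HARDENING_LEVEL >= (level)) ? !!(cond) : 1)
```

A failed check in the real code would go through LOG_ERR plus k_panic() rather than __ASSERT, as described above.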
Also adds heap logging via LOG_MODULE_REGISTER. This paves the way for
eventual permanent debugging instrumentation.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Factor out the core heap validation logic into z_heap_full_check()
which takes a struct z_heap pointer directly. This allows internal
callers (e.g. alloc_chunk) to validate the heap without needing
the public struct sys_heap wrapper.
sys_heap_validate() becomes a thin wrapper that calls
z_heap_full_check() and then validates runtime stats.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Add optional canary values at the end of each heap allocation to detect
memory corruption. The canary is validated when memory is freed, catching
buffer overflows (writing past allocation) and double-free errors.
The canary is computed from the chunk address and size, XORed with a
magic value. On free, it is checked and then poisoned to detect
double-free attempts.
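A minimal model of that scheme is shown below. The derivation (address XOR size XOR magic) and the check-then-poison behavior follow the commit text; the actual constants and function names are assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical constants; the real magic values differ. */
#define CANARY_MAGIC  0x5ca1ab1e5ca1ab1eULL
#define CANARY_POISON 0xdeadbeefdeadbeefULL

static uint64_t canary_value(uintptr_t chunk_addr, size_t size)
{
	return ((uint64_t)chunk_addr ^ (uint64_t)size) ^ CANARY_MAGIC;
}

/* On free: verify the trailer, then poison it so a second free of the
 * same chunk is detected as corruption. */
static bool canary_check_and_poison(uint64_t *trailer,
				    uintptr_t chunk_addr, size_t size)
{
	if (*trailer != canary_value(chunk_addr, size)) {
		return false;   /* buffer overflow or double free */
	}
	*trailer = CANARY_POISON;
	return true;
}
```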
The canary is stored as trailer data at the end of the chunk rather than
in the header to avoid complicating aligned allocation processing, and
because buffer overflows are most likely to overwrite past the buffer end
anyway.
This adds 8 bytes of memory overhead per allocation and a canary
computation on alloc and validation on free. It is useful for hardening
against memory corruption as well as for chasing bugs during
development. The trailer structure can be readily extended to carry
additional per-allocation metadata if so desired.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The function checks if a chunk is too small to be added to the free list.
The new name better reflects its purpose and the comparison now uses
min_chunk_size() for clarity.
Such chunks are not added to the free list because they would be too
small to be allocatable, and they might be too small to store the free
list pointers. It happens that min_chunk_size() is always >= the free
pointer storage size.
The big_heap() condition short-circuits the comparison with a build-time
constant when undersized chunks cannot occur.
A following commit will rely on this to exclude chunks that are smaller
than min_chunk_size() which will grow to account for trailer data.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Replace chunksz_to_bytes() which took a chunk size with
chunk_usable_bytes() which takes a chunk_id directly. This eliminates
the redundant pattern of chunksz_to_bytes(h, chunk_size(h, c)) and makes
the API clearer by returning actual usable bytes (excluding the header).
Add mem_align_gap() helper to compute alignment padding between the
start of usable chunk memory and the actual memory pointer, using
efficient bit masking. This simplifies sys_heap_usable_size() and
inplace_realloc(). Use these helpers to make runtime stats reporting
reflect actual usable chunk memory (excluding chunk headers), and heap
listener notifications also account for alignment gaps.
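The bit-masking trick behind an alignment-gap helper can be shown in isolation. The name `mem_align_gap` follows the commit text, but this signature and body are an assumption, a sketch of the technique rather than the actual implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Padding needed to round mem up to a power-of-two alignment,
 * computed with bit masking only: the low bits of -mem are exactly
 * the distance to the next multiple of align. */
static size_t mem_align_gap(uintptr_t mem, uintptr_t align)
{
	return (size_t)(-mem & (align - 1));
}
```

In the heap, this gap is the difference between the start of a chunk's usable memory and the aligned pointer actually handed to the user, which is what the stats and listener notifications account for.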
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Z_HEAP_MIN_SIZE and Z_HEAP_MIN_SIZE_FOR were defined in kernel.h as
hardcoded magic numbers gated by a growing tower of #ifdefs — one
per Kconfig option that happened to affect the heap struct layout.
Every internal change required manually recomputing the constants,
duplicating layout knowledge across files, and praying nobody forgot
to update the #ifdef matrix. This is fragile and unscalable: adding
a single new heap feature (e.g. a chunk canary trailer) would add yet
another dimension to the combinatorial explosion.
Replace this with build-time computation from the actual C structures.
A new lib/heap/heap_constants.c is compiled as part of the offsets
library and uses GEN_ABSOLUTE_SYM to emit the correct values into the
generated offsets.h. Z_HEAP_MIN_SIZE is derived through an iterative
fixed-point expansion (3 rounds, always convergent) that mirrors the
runtime logic in sys_heap_init(). Z_HEAP_MIN_SIZE_FOR overhead and
bucket sizes are also generated, keeping all internal heap layout
knowledge in one place.
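The fixed-point idea is that the minimum heap size must cover its own bookkeeping overhead, and the overhead itself can depend on the resulting total (e.g. big vs small heap headers), so the computation is iterated until it stabilizes. The sketch below uses a made-up overhead model purely to demonstrate why three rounds suffice; it is not the real sys_heap arithmetic.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical overhead model: crossing a size threshold switches to
 * larger bookkeeping, mimicking the big/small heap distinction. */
static size_t heap_overhead(size_t total)
{
	return (total > 1024) ? 64 : 32;
}

/* Iterative fixed-point expansion: recompute the total until the
 * overhead estimate no longer changes it (3 rounds always converge
 * for a monotone step model like this). */
static size_t heap_min_size(size_t payload)
{
	size_t size = payload;

	for (int i = 0; i < 3; i++) {
		size = payload + heap_overhead(size);
	}
	return size;
}
```

Note how a payload of 1000 needs the second round: adding the small-heap overhead pushes the total past the threshold, which in turn requires the larger overhead.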
Big vs small heap determination uses CONFIG_SYS_HEAP_SMALL_ONLY,
CONFIG_SYS_HEAP_BIG_ONLY, and sizeof(void *), mirroring the
big_heap_chunks() logic in heap.h.
kernel.h picks up the generated values via
__has_include(<zephyr/offsets.h>) so there is no circular dependency
with the offsets compilation itself. The old _Z_HEAP_SIZE manual
sizeof and BUILD_ASSERT scaffolding in heap.c are removed.
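The `__has_include` pattern referenced above lets a header consume generated values only once they exist, breaking the circular dependency with the generation step. The snippet below demonstrates the shape of the guard using `<stdint.h>` as a stand-in for the generated header:

```c
#include <assert.h>

/* Guarded include: only pulled in when the header is available, so a
 * compilation pass that runs before generation still succeeds. */
#if defined(__has_include)
#if __has_include(<stdint.h>)
#include <stdint.h>
#define HAVE_GENERATED_HEADER 1
#else
#define HAVE_GENERATED_HEADER 0
#endif
#else
#define HAVE_GENERATED_HEADER 0
#endif
```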
gen_offset_header.py is updated to accept multiple input object files
so that the heap constants object can coexist with the per-arch offsets
object in the same offsets library. COMMAND_EXPAND_LISTS is added to
the offsets generation custom command so that CMake correctly expands
the $<TARGET_OBJECTS:> generator expression into separate arguments
when the offsets library contains more than one object file.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Use the zvfs macros in the code of the module itself, instead of using
the versions from the POSIX API, and remove the header that defined those
as it is not needed anymore.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
Calling sys_heap_runtime_stats_get on a valid but uninitialized heap
results in a NULL pointer access. Add an additional check in this
function that the heap is valid and has also been initialized.
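The defensive check might look like the sketch below; the struct layout and names are simplified stand-ins for Zephyr's `sys_heap`, where an internal pointer stays NULL until sys_heap_init() runs.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified model: the internal pointer is NULL until init. */
struct sys_heap {
	void *heap;
};

static bool heap_stats_get(const struct sys_heap *h, size_t *free_bytes)
{
	/* Reject NULL and valid-but-uninitialized heaps instead of
	 * dereferencing a NULL internal pointer. */
	if (h == NULL || h->heap == NULL) {
		return false;
	}
	*free_bytes = 0;   /* the real code would gather stats here */
	return true;
}
```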
Signed-off-by: Graham Roff <grahamr@qti.qualcomm.com>
Add an option to use default alignment when building a cbprintf package
on riscv (rv32e). It is useful when cbprintf packages are built on rv32e
but formatted on another core. There is such a case on nrf54h20, where
log messages are formatted by the ARM Cortex M33 core (cpuapp), and
without this option 64-bit arguments are formatted incorrectly.
Signed-off-by: Krzysztof Chruściński <krzysztof.chruscinski@nordicsemi.no>
CMake removes any leading -D passed to target_compile_definitions on an
item, so drop them here to keep the code style consistent.
Signed-off-by: Paul He <pawpawhe@gmail.com>
There is no need to pull in POSIX types in either of the modified files,
so remove the `<sys/types.h>` inclusion.
Signed-off-by: Chris Friedt <chris@fr4.co>
Add support for retrieving heap stats for the malloc heap in the common
libc malloc implementation. This provides the ability to see free and
used bytes using the same structure as the kernel heap.
Without this there is no method to retrieve the malloc heap usage as the
relevant sys_heap structure is private.
Signed-off-by: Graham Roff <grahamr@qti.qualcomm.com>
Add a macro to approximate the heap size required for an allocation of
N bytes to succeed. The intent is to provide a way for small heaps to
specify their sizes more accurately.
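Such a macro typically layers a fixed per-heap cost, a per-allocation cost, and rounding to the allocation granularity. The name and all constants below are hypothetical, chosen only to show the shape of the approximation:

```c
#include <assert.h>

/* Hypothetical overheads; Zephyr's actual chunk header and rounding
 * constants differ and depend on configuration. */
#define HEAP_STRUCT_OVERHEAD 64u  /* fixed bookkeeping per heap */
#define CHUNK_OVERHEAD        8u  /* per-allocation header cost */
#define CHUNK_ROUND           8u  /* allocation granularity */

/* Approximate heap size needed for one allocation of n bytes. */
#define HEAP_SIZE_FOR_ALLOC(n) \
	(HEAP_STRUCT_OVERHEAD + CHUNK_OVERHEAD + \
	 (((n) + CHUNK_ROUND - 1u) / CHUNK_ROUND) * CHUNK_ROUND)
```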
Signed-off-by: Jordan Yates <jordan@embeint.com>
Embeds both an anonymous union and an anonymous structure within the
k_spinlock structure to ensure that the structure can easily have a
non-zero size.
This new option provides a cleaner way to specify that the
spinlock structure must have a non-zero size. A non-zero size
is necessary when C++ support is enabled, or when a library
or application wants to create an array of spinlocks.
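The layout idea can be illustrated as below. The member names are placeholders rather than the actual k_spinlock fields; the point is that the anonymous union guarantees a non-zero size even if the functional members are configured out.

```c
#include <assert.h>
#include <stdint.h>

struct spinlock_like {
	union {
		struct {
			uintptr_t locked;  /* present in some configs */
		};
		/* Guarantees sizeof(struct spinlock_like) != 0 even
		 * when all functional members are configured out, so
		 * C++ builds and arrays of locks behave sensibly. */
		char _dummy;
	};
};
```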
Fixes #59922
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
`armclang` doesn't support picolibc right now, so disable it and fix a
few related issues such as the one below:
```
zephyrproject/zephyr/lib/libc/validate_libc.c:17:14: error: static
assertion failed due to requirement 'sizeof(unsigned int) >= 8': time_t
cannot hold 64-bit values
17 | BUILD_ASSERT(sizeof(time_t) >= 8, "time_t cannot hold 64-bit
values");
```
Signed-off-by: Sudan Landge <sudan.landge@arm.com>
Check and handle errors returned by pthread_create() when using
SIGEV_THREAD notification.
Previously, the return value was ignored, which could lead to silent
failures.
Proper error handling is added to propagate failures and set errno
accordingly.
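The error-handling pattern is roughly the following; the surrounding timer code is elided and the function names here are illustrative. Note that pthread_create() returns the error number directly rather than setting errno itself.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

static void *sigev_thread_fn(void *arg)
{
	return arg;
}

static int notify_sigev_thread(pthread_t *tid)
{
	int ret = pthread_create(tid, NULL, sigev_thread_fn, NULL);

	if (ret != 0) {
		/* Propagate the failure via errno instead of silently
		 * ignoring the return value. */
		errno = ret;
		return -1;
	}
	return 0;
}
```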
Signed-off-by: Gaetan Perrot <gaetan.perrot@spacecubics.com>
Added an ifdef guard (CONFIG_GETOPT_LONG) around the functions in
getopt_shim.c that require the getopt_long implementation.
Signed-off-by: Magne Værnes <magne.varnes@nordicsemi.no>
Do not add this folder to the include path when this component is not
enabled. As that creates noise and slows down builds.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
CONFIG_XOPEN_STREAMS does not follow the pattern of the other XSI
Option Groups: it is named after the feature test macro that indicates
implementation support rather than after the Option Group itself.
Deprecate CONFIG_XOPEN_STREAMS and rename it to CONFIG_XSI_STREAMS.
For more information, please see
https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap02.html#tag_02_01_05_09
Signed-off-by: Chris Friedt <chris@fr4.co>
Fixes this define leaking into all application source files when
the feature is not even enabled.
Co-authored-by: Chris Friedt <cfriedt@tenstorrent.com>
Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
k_condvar_broadcast() does not return an error; on success it returns
the number of woken threads. We should not assert on its return value.
Signed-off-by: Marco Casaroli <marco.casaroli@gmail.com>
When eventfd is used through read(2) and write(2), the mutex is
already locked by the fdtable implementation. So remove the mutex
usage from the zvfs_eventfd_*_op functions, as it is already managed
by fdtable.
However, when zvfs_eventfd_{read,write} are used directly, no fdtable
layer is involved, and the _op function should be called with the mutex
locked (the same behavior as with fdtable), so these functions must
manage the mutex themselves. Add that handling there.
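The locking split can be modeled in miniature: `_op` functions assume the lock is held, and direct entry points take it themselves. The names mirror the commit text, but the bodies are illustrative and a POSIX mutex stands in for the kernel lock.

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t efd_lock = PTHREAD_MUTEX_INITIALIZER;
static uint64_t efd_count;

/* Called with efd_lock already held, either by the fdtable layer or
 * by the direct wrapper below. */
static void eventfd_write_op(uint64_t value)
{
	efd_count += value;
}

/* Direct zvfs-style entry point: no fdtable layer is involved, so it
 * must take the lock itself before calling the _op function. */
static void eventfd_write_locked(uint64_t value)
{
	pthread_mutex_lock(&efd_lock);
	eventfd_write_op(value);
	pthread_mutex_unlock(&efd_lock);
}
```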
Fixes #99234
Signed-off-by: Marco Casaroli <marco.casaroli@gmail.com>
C99 § 7.19.6.5 defines `snprintf`. According to ¶ 2:
> If `n` is zero, nothing is written, and `s` may be a null pointer.
And according to § 7.19.6.12 ¶ 2:
> The `vsnprintf` function is equivalent to `snprintf` (...)
However, prior to this change, `vsnprintfcb` (and, indirectly,
`snprintfcb`) unconditionally null-terminated the output buffer.
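The corrected termination logic is the standard snprintf pattern: write the NUL only when the buffer has room, so `n == 0` (possibly with `s == NULL`) writes nothing, per C99 7.19.6.5. The helper name below is illustrative, not the cbprintf code.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy src into s with snprintf-style truncation semantics; returns
 * the length that would have been written given unlimited space. */
static size_t copy_out(char *s, size_t n, const char *src)
{
	size_t len = strlen(src);

	if (n > 0 && s != NULL) {
		size_t to_copy = (len < n - 1) ? len : n - 1;

		memcpy(s, src, to_copy);
		s[to_copy] = '\0';  /* terminate only when n > 0 */
	}
	return len;
}
```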
This fixes #48394, which was auto-closed without actually being fixed.
Co-authored-by: Adrien Lessard <adrien.lessard@rbr-global.com>
Signed-off-by: Samuel Coleman <samuel.coleman@rbr-global.com>
Previously, eventfd file descriptors were not being counted against the
required size for the global file descriptor table, which would result
in the function `eventfd()` (and `zvfs_eventfd()`) failing due to
insufficient resources.
Signed-off-by: Chris Friedt <chris@fr4.co>
The tolower() function takes an int parameter. LLVM compilers generate a
warning if a char is passed instead.
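A common way to address this is to route the value through an `unsigned char` cast, which both silences the warning and avoids undefined behavior for negative `char` values:

```c
#include <assert.h>
#include <ctype.h>

/* Cast to unsigned char before calling tolower(): the argument must
 * be representable as unsigned char or be EOF. */
static char to_lower_char(char c)
{
	return (char)tolower((unsigned char)c);
}
```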
Signed-off-by: Keith Short <keithshort@google.com>
A couple of tests were inconsistent with glibc and picolibc.
Significant rework done to the `fnmatch()` implementation which included
refreshing that and the `rangematch()` implementations from commit
0a3b2e376d150258c8294c12a85bec99546ab84b
in https://github.com/lattera/freebsd
Removed `match_posix_class()` and implemented that functionality as
`rangematch_cc()`, which uses 64-bit integer comparison for matching
`[:alnum:]` et al instead of string comparison. That likely only works
for the "C" locale.
Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
Signed-off-by: Harun Spago <harun.spago.code@gmail.com>