Commit graph

3,317 commits

Peter Mitsis
864e648e68 kernel: Add ifdef guard around ipi_lock definition
The global variable ipi_lock is both local to the file ipi.c and
only used when CONFIG_SCHED_IPI_SUPPORTED is enabled. As such its
definition should be wrapped with an ifdef.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-12-05 10:55:32 +02:00
Peter Mitsis
c08905ecc9 kernel: Add thread runtime stack safety
Adds support for thread runtime stack safety. This kernel feature
allows a developer to run enhanced stack usage checks on threads
such that if the amount of unused stack space drops below a thread's
configured threshold, it will invoke a custom handler/callback.

This can be used by monitoring software to log warnings, suspend
or abort threads, or even reboot the system.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 19:25:44 +00:00
Peter Mitsis
ce6c26a927 kernel: Simplify move_current_to_end_of_prio_q()
It is now more obvious that the move_current_to_end_of_prio_q() logic
is supposed to match that of k_yield() (without the schedule point).

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Peter Mitsis
77ad7111e1 kernel: Rename move_thread_to_end_of_prio_q()
All instances of the internal routine move_thread_to_end_of_prio_q()
use the current thread. Renaming it to move_current_to_end_of_prio_q()
to reflect that.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Peter Mitsis
ffc6c8839b kernel: Rename z_move_thread_to_end_of_prio_q()
The routine z_move_thread_to_end_of_prio_q() has been renamed to
z_yield_testing_only() as it was only used for test code and
always operated on the current thread.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2025-11-25 17:37:52 +00:00
Liu Qian
a2ca8b9b0a device: remove duplicate code
API z_device_state_init has already been defined in init.c

Signed-off-by: Liu Qian <liuqian.andy@picoheart.com>
2025-11-24 17:33:13 +01:00
Jamie McCrae
b128e51994 kernel: kconfig: Disable DEVICE_DEINIT_SUPPORT by default
This Kconfig option, which its own help text admits is for a "very
specific case", was set to default to yes. That pulls extra code into
drivers with this functionality and increases the driver struct size for
cases where the feature isn't needed (i.e. all of them, because it's
enabled by default). Therefore, change it to be opt-in rather than
opt-out.

Signed-off-by: Jamie McCrae <jamie.mccrae@nordicsemi.no>
2025-11-20 17:14:50 +00:00
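The shape of such a change, assuming standard Kconfig syntax (the symbol name comes from the commit title; the prompt and help text here are illustrative): a `bool` symbol with no `default` line defaults to `n`, making it opt-in.

```kconfig
config DEVICE_DEINIT_SUPPORT
	bool "Support device de-initialization"
	# No "default y" line: a bool symbol defaults to n, so the extra
	# driver code and larger driver struct are only paid for when a
	# project explicitly opts in.
	help
	  Enable support for de-initializing devices. Only needed for
	  very specific use cases.
```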
Yong Cong Sin
3c5807f6ec arch: riscv: stacktrace: support stacktrace in early system init
Add support for stacktrace in dummy thread which is used to run
the early system initialization code before the kernel switches
to the main thread.

On RISC-V, the dummy thread temporarily runs on the interrupt
stack, but we do not currently initialize the stack info for the
dummy thread, so check the address against the interrupt stack
instead.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2025-11-18 17:38:22 -05:00
Yong Cong Sin
4f5f42fa69 kernel: thread: constify thread arg of read-only functions
Since these helper functions are read-only, mark the `thread`
arg as `const` so that we can pass const thread to it without
triggering warnings.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2025-11-18 17:38:22 -05:00
Ederson de Souza
d6071319b5 kernel/userspace: Dynamically allocate privileged stack after user stack
When ARM CONFIG_BUILTIN_STACK_GUARD=y, it expects that the privileged
stack has a higher memory address than that of the normal user stack.
However, dynamically allocated stacks had it the other way round: the
privileged stack had a lower memory address.

This was probably not caught before because relevant tests, such as
`kernel.threads.dynamic_thread.stack.pool.alloc.user` run with no
hardware stack protection. If one were to test it on HW that has stack
protection, such as frdm_mcxn947 with CONFIG_HW_STACK_PROTECTION=y, they
would see it failing.

This patch naively assumes that ARC and RISC-V PMP will be happy with
the shuffling of user and privileged stack positions.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2025-11-13 23:20:45 +02:00
Nicolas Pitre
af7ae5d61f kernel: sched: plug assertion race in z_get_next_switch_handle()
Commit d4d51dc062 ("kernel:  Replace redundant switch_handle assignment
with assertion") introduced an assertion check that may be triggered
as follows by tests/kernel/smp_abort:

CPU0              CPU1              CPU2
----              ----              ----
* [thread A]      * [thread B]      * [thread C]
* irq_offload()   * irq_offload()   * irq_offload()
* k_thread_abort(thread B)
                  * k_thread_abort(thread C)
                                    * k_thread_abort(thread A)
* thread_halt_spin()
* z_is_thread_halting(_current) is false
* while (z_is_thread_halting(thread B));
                  * thread_halt_spin()
                  * z_is_thread_halting(_current) is true
                  * halt_thread(_current...);
                  * z_dummy_thread_init()
                    - dummy_thread->switch_handle = NULL;
                    - _current = dummy_thread;
                  * while (z_is_thread_halting(thread C));
* z_get_next_switch_handle()
* z_arm64_context_switch()
* [thread A is dead]
                                    * thread_halt_spin()
                                    * z_is_thread_halting(_current) is true
                                    * halt_thread(_current...);
                                    * z_dummy_thread_init()
                                      - dummy_thread->switch_handle = NULL;
                                      - _current = dummy_thread;
                                    * while(z_is_thread_halting(thread A));
                  * z_get_next_switch_handle()
                    - old_thread == dummy_thread
                    - __ASSERT(old_thread->switch_handle == NULL) OK
                  * z_arm64_context_switch()
                    - str x1, [x1, #___thread_t_switch_handle_OFFSET]
                  * [thread B is dead]
                  * %%% dummy_thread->switch_handle no longer NULL %%%
                                    * z_get_next_switch_handle()
                                      - old_thread == dummy_thread
                                      - __ASSERT(old_thread->
                                             switch_handle == NULL) FAIL

This needs at least 3 CPUs and the perfect timing for the race to work as
sometimes CPUs 1 and 2 may be close enough in their execution paths for
the assertion to pass. For example, QEMU is OK while FVP is not.
Also adding sufficient debug traces can make the issue go away.

This happens because the dummy thread is shared among concurrent CPUs.
It could be argued that a per-CPU dummy thread structure would be the
proper solution to this problem. However the purpose of a dummy thread
structure is to provide a dumping ground for the scheduler code to work
while the original thread structure might already be reused and
therefore can't be clobbered as demonstrated above. But the dummy
structure _can_ be clobbered to some extent and it is not worth the
additional memory footprint implied by per-CPU instances. We just have
to ignore some validity tests when the dummy thread is concerned.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 13:45:24 -05:00
Nicolas Pitre
1c8f1c8647 kernel: sched: use clearly invalid value for halting thread switch_handle
When a thread halts and is dummified, set its switch_handle to (void *)1
instead of the thread pointer itself. This maintains the non-NULL value
required to prevent deadlock in k_thread_join() while making it obvious
that this value is not meant to be dereferenced or used.

The switch_handle should be an opaque architecture-specific value and
not be assumed to be a thread pointer in generic code. Using 1 makes
the intent clearer.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-11-04 13:45:24 -05:00
Carles Cufi
cd8e773b32 kernel: events: Depend on multithreading
Kernel events depend on multithreading being enabled, and mixing them
with a non-multithreaded build gives linker failures internal to
events.c. To avoid this, make events depend on multithreading.

```
libkernel.a(events.c.obj): in function `k_event_post_internal':
175: undefined reference to `z_sched_waitq_walk'
events.c:183: undefined reference to `z_sched_wake_thread'
events.c:191: undefined reference to `z_reschedule'
libkernel.a(events.c.obj): in function `k_sched_current_thread_query':
kernel.h:216: undefined reference to `z_impl_k_sched_current_thread_query'
libkernel.a(events.c.obj): in function `k_event_wait_internal':
events.c:312: undefined reference to `z_pend_curr'
```

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2025-10-30 15:13:38 +02:00
Anas Nashif
303af992e5 style: fix 'if (' usage in cmake files
Replace with 'if(' and 'else(' per the cmake style guidelines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-29 11:44:13 +02:00
TaiJu Wu
d4d51dc062 kernel: Replace redundant switch_handle assignment with assertion
The switch_handle for the outgoing thread is expected to be NULL
at the start of a context switch.
The previous code performed a redundant assignment to NULL.

This change replaces the assignment with an __ASSERT(). This makes the
code more robust by explicitly enforcing this precondition, helping to
catch potential scheduler bugs earlier.

Also, the switch_handle pointer is used to check a thread's state during a
context switch. For dummy threads, this pointer was left uninitialized,
potentially holding an unexpected value.

Set the handle to NULL during initialization to ensure these threads are
handled safely and predictably.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-25 15:59:29 +03:00
Anas Nashif
e23d663b85 tracing: ctf: add condition variables
Add hooks for condition variables.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-25 15:59:19 +03:00
Anas Nashif
a5728add11 kernel: msgq: return once to simplify tracing
Return once, simplifying the tracing macros.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-25 15:59:19 +03:00
TaiJu Wu
91f1acbb85 kernel: Add more debug info and thread checking in run queue
1. There is debug info within k_sched_unlock, so we should add the
   same debug info to k_sched_lock.

2. A thread in the run queue should be a normal or meta-IRQ thread; we
   should check that it is not the dummy thread.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-24 13:26:15 -04:00
Fabio Baltieri
700a1a5a28 lib, kernel: use single evaluation min/max/clamp
Replace all in-function instances of MIN/MAX/CLAMP with the single
evaluation version min/max/clamp.

There are probably no race conditions in these files, but the single
evaluation versions save a couple of instructions each, so they should
save a few code bytes and potentially perform better, and should be
preferred in general.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2025-10-24 01:10:40 +03:00
Nicolas Pitre
b5363d5fff kernel: usage: Fix CPU stats retrieval in z_sched_cpu_usage()
The z_sched_cpu_usage() function was incorrectly using _current_cpu
instead of the requested cpu_id parameter when retrieving CPU usage
statistics. This caused it to always return stats from the current CPU
rather than the specified CPU.

This bug manifested in SMP systems when k_thread_runtime_stats_all_get()
looped through all CPUs - it would get stats from the wrong CPU for
each iteration, leading to inconsistent time values. For example, in
the times() POSIX function, this caused time to appear to move backwards:

  t0: utime: 59908
  t1: utime: 824

The fix ensures that:
1. cpu pointer is set to &_kernel.cpus[cpu_id] (the requested CPU)
2. The check for "is this the current CPU" is correctly written as
   (cpu == _current_cpu)

This fixes the portability.posix.muti_process.newlib test failure
on FVP SMP platforms where times() was reporting backwards time.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-10-22 09:04:13 +02:00
Daniel Leung
38d49efdac kernel: mem_domain: keep track of threads only if needed
Adds a new kconfig CONFIG_MEM_DOMAIN_HAS_THREAD_LIST so that
only the architectures requiring to keep track of threads in
memory domains will have the necessary list struct inside
the memory domain structs. Saves a few bytes for those arch
not needing this.

Also rename the struct fields to be most descriptive of what
they are.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2025-10-21 22:54:44 +03:00
Nicolas Pitre
8d1da57d57 kernel: mmu: k_mem_page_frame_evict() fix locking typo
... when CONFIG_DEMAND_PAGING_ALLOW_IRQ is set.

Found during code inspection. k_mem_page_frame_evict() is otherwise
rarely used.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2025-10-21 22:53:04 +03:00
Anas Nashif
6240b0ddb9 kernel: set DYNAMIC_THREAD_STACK_SIZE to 4096 for coverage
Increase stack sizes to allow coverage to complete.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-14 17:32:46 -04:00
Anas Nashif
f22a0afc74 testsuite: coverage: Support semihosting
Use semihosting to collect coverage data instead of dumping data to
serial console.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-10-14 17:32:46 -04:00
Łukasz Stępnicki
6571f4e1bc kernel: work: work timeout handler uninitialized variables fix
The work and handler pointers are local and not initialized.
Initialize them with NULL to avoid the compiler's maybe-uninitialized
error.

Signed-off-by: Łukasz Stępnicki <lukasz.stepnicki@nordicsemi.no>
2025-10-10 12:55:06 -04:00
Andrzej Puzdrowski
eb931d425f kernel/Kconfig.init: update description of SOC_RESET_HOOK
Updated the description of the conditions and assumptions under which
soc_reset_hook is executed.

Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2025-10-07 12:50:10 +02:00
Andrzej Puzdrowski
418eed0f90 arch/arm: introduce the pre-stack/RAM init hook
Introduce a hook to customize reset.S code even before the stack is
initialized or RAM is accessed. The hook can be enabled using
CONFIG_SOC_EARLY_RESET_HOOK=y and is implemented by a
soc_early_reset_hook() function, which should be provided by custom
code.

Signed-off-by: Andrzej Puzdrowski <andrzej.puzdrowski@nordicsemi.no>
2025-10-07 12:50:10 +02:00
Chris Friedt
6c01157fef kernel: dynamic: update storage size for pool of dynamic thread stacks
Commit 5c5e17f introduced a subtle regression when userspace was
configured on architectures requiring guard pages.

Prior to 5c5e17f, the assumption was that guard pages would be included in
`CONFIG_DYNAMIC_THREAD_STACK_SIZE`, and that was something that the caller
of `k_thread_stack_alloc()` would need to be aware of, although it was not
documented at all, unfortunately.

It seems that 5c5e17f intended to remove the need for that assumption, but
the necessary conditions for doing so had not been met.

Update pool storage size to account for guard pages, which ensures that
users can access every byte of `CONFIG_DYNAMIC_THREAD_STACK_SIZE` rather
than needing to be aware that guard pages would be included in the
requested size.

The compromise is a more intuitive API at the cost of more storage space
for the pool of thread stacks when userspace is enabled.

Signed-off-by: Chris Friedt <cfriedt@tenstorrent.com>
2025-10-02 11:46:22 +03:00
TaiJu Wu
61bc4451f6 kernel: essential work queue should not stop
Consider the following case:
```
ZTEST(workqueue_api, test_k_work_queue_stop_sys_thread)
{
	size_t i;
	struct k_work work;
	struct k_work_q work_q = {0};
	struct k_work works[NUM_TEST_ITEMS];
	struct k_work_queue_config cfg = {
		.name = "test_work_q",
		.no_yield = true,
		.essential = true,
	};

	k_work_queue_start(&work_q, work_q_stack,
			   K_THREAD_STACK_SIZEOF(work_q_stack),
			   K_PRIO_PREEMPT(4), &cfg);

	zassert_true(k_work_queue_drain(&work_q, true) >= 0,
	    "Failed to drain & plug work queue");
	zassert_not_ok(k_work_queue_stop(&work_q, K_FOREVER),
	    "Failed to stop work queue");
}
```

If we allow stopping an essential work queue, the system will panic.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-10-01 08:21:56 +02:00
Declan Snyder
88c61d3668 include: hooks.h: Add mocks
Add mocks of platform hooks so that #ifdefs are not needed around
calls to these functions.

Signed-off-by: Declan Snyder <declan.snyder@nxp.com>
2025-09-24 19:21:07 -04:00
TaiJu Wu
623d8fa540 kernel: cleanup thread state checks and unnecessary CONFIG check
The commit replaces negative thread state checks with a new,
more descriptive positive check.
The expression `!z_is_thread_prevented_from_running()`
is updated to `z_is_thread_ready()` where appropriate, making
the code's intent clearer.

Removes a redundant `IS_ENABLED(CONFIG_SMP)` check, as the code is
already inside an #ifdef.

Finally, this patch adds the missing `#endif` directive.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-09-24 09:43:30 +02:00
TaiJu Wu
e069ce242c kernel: Consolidate thread state checking functions
This patch moves `is_aborting()` and `is_halting()`
from `kernel/sched.c` to `kernel/include/kthread.h`
and renames them to `z_is_thread_aborting()` and `z_is_thread_halting()`,
for consistency with other internal kernel APIs.

It replaces the previous inline function definitions in `sched.c`
with calls to the new header functions. Additionally, direct bitwise
checks like `(thread->base.thread_state & _THREAD_DEAD) != 0U`
are updated to use the new `z_is_thread_dead()` helper function.
This enhances code readability and maintainability.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-09-24 09:43:30 +02:00
Aaron Wisner
202ba136a0 include: Fix C headers such that they can be included in C++ context.
ksched.h: Add missing extern "C" for C++.
kernel_arch_func.h: Rename reserved "new" C++ keyword.

No functional change.

Signed-off-by: Aaron Wisner <aaronwisner@gmail.com>
2025-09-19 17:47:34 -04:00
Mohamed Moawad
ccfe64627e kernel: events: add conditional guards for timeout operations
Add conditional compilation guards around timeout operations in
kernel/events.c to ensure compatibility with timer-less configurations.

Signed-off-by: Mohamed Moawad <moawad@synopsys.com>
2025-09-18 09:46:29 +01:00
Adrian Warecki
5c5e17f0f3 kernel: dynamic: Optimize stack pool usage
Add the flags parameter to the z_thread_stack_alloc_pool function.
Determine the maximum possible stack size based on the size of the reserved
memory for stack and the thread type (flags).

The stack size that can be used by a thread depends on its type
(kernel/user). For the same stack size, the macros K_KERNEL_STACK_DECLARE
and K_THREAD_STACK_DEFINE may reserve different amounts of memory.

Signed-off-by: Adrian Warecki <adrian.warecki@intel.com>
2025-09-16 16:07:05 -04:00
TaiJu Wu
d361ec9692 kernel: message does not execute correct put front behavior
When the buffer is full, Thread A gets pended (blocked).
If Thread B later calls the get function, it will unpend Thread A,
allowing it to resume and put the message into the queue.
In this situation, we need to know whether Thread A should
continue with put-to-front or put-to-end.

To resolve this issue, we don't allow a timeout parameter for
`k_msgq_put_front`; it is always `K_NO_WAIT`.

Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
2025-09-13 21:22:52 -04:00
Al Semjonovs
911b3da139 kernel: Clean-up lingering code coverage exclusion flag
Remove LCOV_EXCL_STOP flag as LCOV_EXCL_START was removed in a previous
commit.  This causes a gcov compilation error.

Signed-off-by: Al Semjonovs <asemjonovs@google.com>
2025-09-12 08:21:21 +01:00
Marcin Szkudlinski
91d17f6931 kernel: add k_thread_absolute_deadline_set call
k_thread_absolute_deadline_set is similar to the existing
k_thread_deadline_set. The difference is that k_thread_deadline_set
takes a deadline as a time delta from the current time, while
k_thread_absolute_deadline_set expects a timestamp in the same units
used by k_cycle_get_32().

This allows deadlines for several threads to be calculated and set
in a deterministic way, using a common timestamp as a "now" time
base.

Signed-off-by: Marcin Szkudlinski <marcin.szkudlinski@intel.com>
2025-09-11 14:18:16 +01:00
Anas Nashif
f5d7081710 kernel: do not include ksched.h in subsys/soc code
Do not directly include and use APIs from ksched.h outside of the
kernel. For now, do this using more suitable internal APIs (ipi.h and
kernel_internal.h) until more cleanup is done.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-09 11:45:06 +02:00
Anas Nashif
6b46c826aa arch: init: z_bss_zero -> arch_bss_zero
Do not use private API prefix and move to architecture interface as
those functions are primarily used across arches and can be defined by
the architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
d98184c8cb arch: boot: rename z_early_memcpy -> arch_early_memcpy
Do not use private API prefix and move to architecture interface as
those functions are primarily used across arches and can be defined by
the architecture.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
06b179233e kernel: use cmake macro for adding kernel files
Simplify the cmake file and use macros for adding files that are part
of the kernel based on the configuration.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
e39de0e257 device: move device syscalls to device.c
Move device model syscalls to device.c and decouple kernel header from
device related routines. Cleanup init to have only what is needed.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
7aa3269a3f kernel: boot args kconfig cleanup
Cleanup kconfig of bootargs and put everything in one menuconfig.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
c9269b9b85 kernel: init: move boot arg handling to own file
No reason for this to be part of the already packed init.c.
Moved to its own file, built only when BOOTARGS is enabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
53a51b9287 kernel/arch: Move early init/boot code out of init/kernel headers
Cleanup init.c code and move early boot code into arch/ and make it
accessible outside of the boot process/kernel.

All of this code is not related to the 'kernel' and is mostly used
within the architecture boot / setup process.

The way it was done, some soc code was including kernel_internal.h
directly, which shouldn't be done.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Anas Nashif
cf6db903e1 kernel: move xip into arch/common
Not really a kernel feature, more of an architecture one, which is
reflected in how XIP is enabled and tested. Move it to architecture
code, where much of the 'implementation' and usage lives.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-08 15:51:07 -04:00
Charles Hardin
81283c678a kernel: event api extensions to clear events and avoid phantom events
This is a variation of the PR to handle phantom events; hopefully
this gets merged into the PR to land.

See-also: https://github.com/zephyrproject-rtos/zephyr/pull/89624
Signed-off-by: Charles Hardin <ckhardin@gmail.com>
2025-09-05 16:50:28 -04:00
Anas Nashif
0c84cc5bc6 kernel: drop deprecated pipe API
This API was deprecated in 4.1, so drop it for the 4.3 release. Use new
PIPE API instead.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2025-09-05 17:13:05 +02:00
Loic Domaigne
6b61ec9d9b kernel: fix error propagation for device deferred initialization
This fix makes sure that do_device_init() returns a negative value if
the device's initialization failed. Previously, it mistakenly returned
+errno instead of -errno.

This oversight happened during the refactoring of z_sys_init_run_level()
to support deferred initialization, from which most of do_device_init()
code derives. The rc value computed and stored in dev->state->init_res
is the POSITIVE value of the resulting errno. Returning rc therefore
breaks the convention of a negative value to signal failure.

Signed-off-by: Loic Domaigne <tech@domaigne.com>
2025-09-04 21:03:01 +02:00