register_event always returns 0, so the conditional always takes
the first branch and the code in the else branch is never reached.
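A minimal sketch of the pattern being fixed (the surrounding code is
invented for illustration):

    static int register_event(struct k_poll_event *event)
    {
        /* ... registration logic; all paths fall through ... */
        return 0;
    }

    void poll_setup(struct k_poll_event *event)
    {
        if (register_event(event) == 0) {
            /* always taken */
        } else {
            /* dead code: register_event() never returns nonzero */
        }
    }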
Fixes #31282
Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
1. Exclude the CODE_UNREACHABLE line when generating the coverage report.
2. Exclude the deprecated memory domain API when calculating code
coverage.
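For illustration only: lcov supports inline exclusion markers, so the
first point can be expressed like this (the surrounding function is
invented):

    void handle_fatal(void)
    {
        /* ... */
        CODE_UNREACHABLE; /* LCOV_EXCL_LINE */
    }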
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
First, the maximum heap size must fit in 31 bits worth of chunks
because the internal 32-bit field holding the size is shared with
the `used` bit.
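A sketch of the constraint, assuming a hypothetical chunk header
layout (the real sys_heap packing differs in detail):

    struct chunk_hdr {
        uint32_t used : 1;  /* shares the 32-bit word with the size, */
        uint32_t size : 31; /* leaving 31 bits for the chunk count   */
    };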
Second, the mention of a 256-byte block in the doc is no longer
relevant; that pertained to the previous allocator implementation.
The same applies to the HEAP_MEM_POOL_MIN_SIZE Kconfig option.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Needing to check the current cycle time (which involves a spinlock and
register read on most architectures) is wasteful in the scheduler
priority predicate, which is a hot path. If we "burn" one bit of
precision (and document the rule), we can do the comparison without
knowing the current time.
2^31 cycles is still far longer than any deadline thread in a
legitimate realtime app should live before being scheduled.
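A sketch of the comparison this enables, assuming deadlines are
stored as raw 32-bit cycle counts (names are illustrative):

    /* True if d1 expires before d2. Signed subtraction wraps
     * correctly without reading the current cycle counter, provided
     * the two deadlines are within 2^31 cycles of each other --
     * the documented rule. */
    static inline bool deadline_before(uint32_t d1, uint32_t d2)
    {
        return (int32_t)(d1 - d2) < 0;
    }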
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Adds a linker section for Cortex-M instruction tightly coupled memory
(ITCM), similar to the existing section for DTCM. A new executable MPU
region is not added as there isn't currently a need to make this section
accessible to user mode. This section can be enabled by setting the
devicetree chosen node zephyr,itcm.
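For illustration, a hypothetical devicetree fragment enabling the
section (the itcm node label is an assumption):

    /{
        chosen {
            zephyr,itcm = &itcm;
        };
    };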
Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
This allows allocating dynamic kernel objects with memory alignment
requirements. The first candidate is thread objects, which on some
architectures must be aligned for saving/restoring registers.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
PM depends on SYS_CLOCK_EXISTS in Kconfig but several boards have
Kconfig overrides that allow the dependency to be ignored, so
CONFIG_PM=y even though CONFIG_SYS_CLOCK_EXISTS=n. Fix the code so
that the true dependency is reflected in the generated code.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
This change adds z_heap_aligned_alloc() and k_aligned_alloc()
and changes z_heap_malloc() and k_malloc() to be small wrappers around
the aligned variants.
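A minimal sketch of the wrapper relationship (the default alignment
here is an assumption, not the verbatim implementation):

    void *k_malloc(size_t size)
    {
        /* plain malloc is now aligned alloc with a default alignment */
        return k_aligned_alloc(sizeof(void *), size);
    }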
Fixes #29519
Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
Ticks should be assigned directly to the timeout value when
CONFIG_LEGACY_TIMEOUT_API=y, just as they were before the referenced patch.
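A simplified sketch of the intended behavior in z_tick_sleep:

    #ifdef CONFIG_LEGACY_TIMEOUT_API
        timeout = ticks;                  /* raw tick count, as before */
    #else
        timeout = Z_TIMEOUT_TICKS(ticks); /* typed timeout wrapper */
    #endif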
Fixes: 7a815d5d99 ("kernel: sched: Use k_ticks_t in z_tick_sleep")
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to POSIX
mmap(), which might confuse people.
The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.
Parameter names to z_phys_map were adjusted slightly to be more
consistent with the names used in other memory mapping functions.
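For reference, a sketch of the renamed function's shape (parameter
names are approximate):

    /* Maps a region of *physical* memory into the virtual address
     * space, returning the virtual base through virt_ptr. */
    void z_phys_map(uint8_t **virt_ptr, uintptr_t phys, size_t size,
                    uint32_t flags);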
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Inside the idle loop, in some configurations, IRQs are unlocked and
then immediately locked again. There is a side effect:
1. IRQs are unlocked in the middle of the loop.
2. Another thread (A) can now run, so the idle thread is un-scheduled.
3. Thread A runs to its end and goes through the thread
self-abort path.
4. The idle thread is rescheduled and continues to run
the remainder of the loop, where it eventually calls k_cpu_idle().
The "pending abort" path is not executed for thread A
at this point.
5. Now thread A is suspended, and the CPU is in idle waiting
for interrupts (e.g. timeouts).
6. Thread B is waiting to join on thread A. Since thread A has
not been terminated yet, thread B waits until the idle thread
runs again and starts executing from the beginning of the
while loop.
7. Depending on how many threads are running and how active
the platform is, the idle thread may not run again for a while,
leaving thread B appearing to be stuck.
To avoid this situation, the unlock/lock pair in the middle of
the loop is removed so no rescheduling can happen mid-loop.
When there is no thread abort pending, the loop simply locks IRQs
and calls k_cpu_idle(). This is almost identical to the idle
loop before the thread abort code was introduced (except for
the check of cpu->pending_abort).
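A simplified sketch of the resulting loop shape (illustrative only,
not the verbatim idle code):

    while (true) {
        /* no irq_unlock()/irq_lock() pair here anymore, so the
         * idle thread cannot be preempted mid-loop */
        (void) arch_irq_lock();

        if (cpu->pending_abort != NULL) {
            /* finish the pending thread abort, then loop */
        } else {
            k_cpu_idle(); /* wait for interrupt */
        }
    }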
Fixes#30573
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In order to release the irq_offload semaphore outside kernel/thread.c,
make it visible by declaring it non-static under ztest. This is needed
when, for example, irq_offload() is called to enter interrupt context
and a fatal error happens: the semaphore must then be released in the
fatal error handler, or irq_offload will remain locked and can never
be used again.
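A sketch of the visibility change (the semaphore name offload_sem is
used as an example):

    /* kernel/thread.c (sketch) */
    #ifdef CONFIG_ZTEST
    struct k_sem offload_sem; /* tests and fatal handlers may release it */
    #else
    static struct k_sem offload_sem;
    #endif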
Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
Clean up code for power management, remove some duplication, and
isolate power management code from the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
- Remove the SYS_ prefix
- Shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE
- Use PM_ as the prefix for all PM-related Kconfig options
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_heap did not have an aligned alloc function, even though
this is supported by the internal sys_heap.
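For reference, a sketch of the added entry point, mirroring
k_heap_alloc (treat the exact signature as an assumption):

    void *k_heap_aligned_alloc(struct k_heap *h, size_t align,
                               size_t size, k_timeout_t timeout);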
Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
These implemented a k_mem_pool in terms of the now universal k_heap
utility. That's no longer necessary now that the k_mem_pool API has
been removed.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The mailbox and msgq utilities had API variants that could pass old
mem_pool blocks through the data structure. That API is being
deprecated (and the features were obscure), so remove the internal
support.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The k_mem_pool allocator is no more, and the z_mem_pool compatibility
API is going away. The internal allocator should always be a k_heap.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These were implemented in terms of the mem_pool/block API directly
(for complicated reasons, the pointers returned from this API may have
been allocated from allocators other than the single system heap).
Have them use a k_heap instead.
Requires a tweak to one test which had hard-coded an assumption about
the header size.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Mark all k_mem_pool APIs deprecated for future code. Remaining
internal usage now uses equivalent "z_mem_pool" symbols instead.
Fixes #24358
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Remove the MEM_POOL_HEAP_BACKEND Kconfig, treating it as always true.
Now the legacy mem_pool cannot be enabled, and all usage goes through
the k_heap/sys_heap backend.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Changed the algorithm for checking for a pending thread in mem_slab.
The check for a pending thread is now done only when there is
no memory left.
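A sketch of the new ordering (the helper names are hypothetical):

    void slab_free(struct k_mem_slab *slab, void *block)
    {
        /* Only when the slab was exhausted can a thread be blocked
         * waiting for memory, so only then look for a waiter. */
        if (slab->free_list == NULL && give_block_to_waiter(slab, block)) {
            return;
        }

        /* fast path: nobody can be waiting while memory is available */
        push_free_block(slab, block);
    }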
Signed-off-by: Kamil Lazowski <Kamil.Lazowski@nordicsemi.no>
z_tick_sleep was using int32_t, which could cause an overflow
when converting from k_ticks_t.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This patch activates the FPU feature for the main thread if the
FPU-related configs (FPU, FPU_SHARING) are enabled.
Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
With MMU features enabled, we are using 248 out of the 256
available bytes on 32-bit. This is extremely uncomfortable; relax
it to a larger value, like several other arches.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Adds a K_DELAYED_WORK_DEFINE, matching the K_WORK_DEFINE macro, with
accompanying Z_DELAYED_WORK_INITIALIZER macro.
Makes k_delayed_work_init a static inline function, like its K_WORK
counterpart.
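For illustration, usage mirrors K_WORK_DEFINE (the handler and delay
are examples):

    static void my_handler(struct k_work *work)
    {
        /* deferred processing */
    }

    K_DELAYED_WORK_DEFINE(my_work, my_handler);

    /* elsewhere: run it 100 ms from now */
    k_delayed_work_submit(&my_work, K_MSEC(100));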
Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
Move banner and boot delay handling out of init.c.
The code for the banner was all over the place in init.c, making it
unreadable.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Most kernel files were declaring the os log module without providing
a log level. Because of that, the default log level was used instead
of CONFIG_KERNEL_LOG_LEVEL.
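A sketch of the fix as it appears at the top of a kernel source file
(standard logging API):

    #include <logging/log.h>
    LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);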
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Add a new function to the mem_slab API that enables the user
to get the maximum number of slabs used so far.
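A sketch of what such an accessor could look like (the name and
field are assumptions following Zephyr's *_get convention):

    /* Returns the high-water mark of simultaneously used blocks. */
    static inline uint32_t k_mem_slab_max_used_get(struct k_mem_slab *slab)
    {
        return slab->max_used;
    }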
Signed-off-by: Kamil Lazowski <Kamil.Lazowski@nordicsemi.no>
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.
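For illustration, one such definition (the member name sp is an
example):

    /* offsets file: emits an absolute symbol for the member offset,
     * referenced from assembly as ___callee_saved_t_sp_OFFSET */
    GEN_OFFSET_SYM(_callee_saved_t, sp);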
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
This uses the timing functions to gather execution cycles of
threads. This provides greater detail if the arch/SoC/board
uses a timer with higher resolution.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Since the tracing of threads being switched in/out uses the same
instrumentation points, we can roll the tracing function calls
into the ones for gathering thread stats.
This avoids duplicating code to call another function.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds the bits to gather the first thread runtime statistic:
thread execution time. It provides a rough idea of how much time
a thread spends in active execution. Currently it is not being
used, pending the following commits where it is combined with the
trace points on context switch, as they instrument the same locations.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Documentation for the SYS_CLOCK_TICKS_PER_SEC Kconfig has some
outdated recommendations. Change them to align with other
documentation under kernel timing.
Fixes: #25482
Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
This legacy struct still had a non-standard name. Clean it up to
conform to current naming guidelines.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Fix the issue where the kernel poll code would place the tracking
struct on the caller stack and share it with other threads, thus
creating a cache coherence issue on systems where KERNEL_COHERENCE is
enabled.
This works by eliminating the thread backpointer in struct _poller and
simply placing the (now just two-byte!) struct directly into the
thread struct.
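A sketch of the resulting layout (field names are illustrative):

    /* The poller state lives in the thread struct itself, so it is
     * always in coherent memory. */
    struct z_poller {
        bool is_polling;
        uint8_t mode;
    };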
Note that this doesn't attempt to fix the API paradigm that the
natural way to structure a call to k_poll() is to use an array of
k_poll_events on the CALLER's stack. So it's likely that most
"typical" k_poll code is still going to have problems with
KERNEL_COHERENCE. But at least now the kernel internals aren't
fundamentally broken.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The poll code was playing a weird trick with the thread pointer in
the "struct _poller" object for a triggered work item. It would not
point to a thread to wake up, but instead to the (non-polling)
thread operated by the work queue being triggered. The code would
never touch this thread; it only used it as a way to get a pointer
to the enclosing work queue struct.
Just store the work queue pointer in the first place. It's much
simpler, and makes future modifications to remove that thread pointer
possible.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>