This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Renames:
Z_KERNEL_VIRT_START to K_MEM_KERNEL_VIRT_START
Z_KERNEL_VIRT_SIZE to K_MEM_KERNEL_VIRT_SIZE
Z_KERNEL_VIRT_END to K_MEM_KERNEL_VIRT_END
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Renames:
Z_VIRT_RAM_START to K_MEM_VIRT_RAM_START
Z_VIRT_RAM_SIZE to K_MEM_VIRT_RAM_SIZE
Z_VIRT_RAM_END to K_MEM_VIRT_RAM_END
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Renames:
Z_PHYS_RAM_START to K_MEM_PHYS_RAM_START
Z_PHYS_RAM_SIZE to K_MEM_PHYS_RAM_SIZE
Z_PHYS_RAM_END to K_MEM_PHYS_RAM_END
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Rename Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() to
K_MEM_BOOT_VIRT_TO_PHYS() and K_MEM_BOOT_PHYS_TO_VIRT()
respectively. This is part of a series to move memory management
functions away from the Z_ namespace and into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions were introduced alongside the memory-mapped
stack feature, and are currently used only there.
To avoid potential confusion with k_mem_map()/k_mem_unmap(),
remove them and use k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() directly instead.
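For illustration, a minimal sketch of the direct use, with the
internal signatures assumed rather than authoritative (phys is
ignored for anonymous mappings in this sketch):
#include <zephyr/kernel.h>
#include <zephyr/kernel/mm.h>
void mapped_stack_sketch(void)
{
    /* Anonymous mapping bracketed by guard pages. */
    uint8_t *stack = k_mem_map_phys_guard((uintptr_t)NULL,
                                          CONFIG_MMU_PAGE_SIZE,
                                          K_MEM_PERM_RW, true);
    if (stack != NULL) {
        /* ... use the mapping ... */
        (void)k_mem_unmap_phys_guard(stack, CONFIG_MMU_PAGE_SIZE, true);
    }
}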
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The internal functions k_mem_map_impl() and k_mem_unmap_impl()
are renamed to k_mem_map_phys_guard() and
k_mem_unmap_phys_guard() respectively to better clarify
their usage.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
As their stacks are defined by Zephyr's kernel/thread stack
definition macros, use Zephyr's kernel/thread stack size macros for
their stack sizes, ensuring consistency and preventing potential
issues related to stack size misconfiguration.
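As a generic illustration of the principle (not the exact code
touched by this commit):
#include <zephyr/kernel.h>
#define MY_STACK_SIZE 1024
K_THREAD_STACK_DEFINE(my_stack, MY_STACK_SIZE);
static struct k_thread my_thread;
static void my_entry(void *a, void *b, void *c) { /* ... */ }
void start_my_thread(void)
{
    /* K_THREAD_STACK_SIZEOF() reflects any alignment or guard
     * adjustments the definition macro applied, so the size passed
     * here always matches the definition above. */
    k_thread_create(&my_thread, my_stack,
                    K_THREAD_STACK_SIZEOF(my_stack),
                    my_entry, NULL, NULL, NULL,
                    K_PRIO_PREEMPT(0), 0, K_NO_WAIT);
}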
Signed-off-by: Dong Wang <dong.d.wang@intel.com>
Move this to a call in the init process. arch_* calls are not
services and should be called consistently during initialization.
Place it between PRE_KERNEL_1 and PRE_KERNEL_2, as some drivers
initialized in PRE_KERNEL_2 might depend on SMP being set up.
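A rough outline of the resulting boot ordering; the init-level
runner call here is our reading of the init code, treat it as a
sketch only:
static void init_ordering_sketch(void)
{
    z_sys_init_run_level(INIT_LEVEL_PRE_KERNEL_1);
#if defined(CONFIG_SMP)
    /* SMP is brought up here so that PRE_KERNEL_2 drivers
     * can rely on it. */
    arch_smp_init();
#endif
    z_sys_init_run_level(INIT_LEVEL_PRE_KERNEL_2);
}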
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
This option allows you to look up a struct device from any of the
node labels that were attached to the devicetree node used to create
the device.
This is helpful because node labels are a much more human-friendly set
of unique identifiers than the node names we are currently relying on
for use with device_get_binding(). Adding this infrastructure in the
device core allows anyone to make use of it without having to
replicate node label storage and search functions in various places in
the tree. The main use case, however, is for looking up devices by
node label in the shell.
Since there is a footprint penalty associated with storing the node
label metadata, leave this option disabled by default.
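A hedged usage sketch; the accessor name device_get_by_dt_nodelabel()
is our assumption of the API this option enables:
#include <zephyr/device.h>
void lookup_example(void)
{
    /* "uart0" is any node label attached to the devicetree node
     * that created the device. */
    const struct device *dev = device_get_by_dt_nodelabel("uart0");
    if ((dev != NULL) && device_is_ready(dev)) {
        /* use dev */
    }
}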
Signed-off-by: Martí Bolívar <mbolivar@amperecomputing.com>
Add an __ASSERT_ON guard around slab_ptr_is_good, as it is only used
in assertions, and leaving it unguarded generates a build warning
with some clang versions:
kernel/mem_slab.c:207:20: error: unused function 'slab_ptr_is_good'
207 | static inline bool slab_ptr_is_good(struct k_mem_slab *slab,...
| ^~~~~~~~~~~~~~~~
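The guard pattern applied, sketched with a stubbed body:
/* Compile the helper only when assertions are enabled, so
 * no-assert builds never see an unused function. */
#if __ASSERT_ON
static inline bool slab_ptr_is_good(struct k_mem_slab *slab,
                                    const void *ptr)
{
    /* body elided here; see the validation sketch further down */
    ARG_UNUSED(slab);
    ARG_UNUSED(ptr);
    return true;
}
#endif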
Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
When CONFIG_BOOT_BANNER is set to "n" but CONFIG_BOOT_DELAY is
enabled, a delay message is still printed at boot time. Suppress
that message so the whole boot banner output can be disabled.
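A sketch of the intended behavior, assuming the delay logic sits
alongside the banner code (message text illustrative):
static void boot_delay_sketch(void)
{
#if defined(CONFIG_BOOT_DELAY) && (CONFIG_BOOT_DELAY > 0)
    /* Announce the delay only when the banner is enabled;
     * the delay itself still happens either way. */
    if (IS_ENABLED(CONFIG_BOOT_BANNER)) {
        printk("***** delaying boot %dms *****\n", CONFIG_BOOT_DELAY);
    }
    k_busy_wait(CONFIG_BOOT_DELAY * USEC_PER_MSEC);
#endif
}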
Signed-off-by: Krzysztof Sychla <ksychla@antmicro.com>
Abstract slab pointer validation and apply it to block dequeue during
allocation, in addition to the existing block freeing. This should
help catch some buffer-overflow-induced corruptions.
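A hedged sketch of the kind of predicate involved; field names follow
the public struct k_mem_slab, but treat the details as illustrative:
static inline bool slab_ptr_is_good(struct k_mem_slab *slab,
                                    const void *ptr)
{
    const char *p = ptr;
    ptrdiff_t offset = p - slab->buffer;

    /* A good block pointer lies within the buffer and falls
     * exactly on a block boundary. */
    return (offset >= 0) &&
           ((size_t)offset <
            (slab->info.block_size * slab->info.num_blocks)) &&
           (((size_t)offset % slab->info.block_size) == 0);
}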
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
As it is, blocks are allocated going backward within the buffer.
There is nothing fundamentally wrong with that, but it makes debugging
unnatural with the successively descending addresses. Create the free
list so pointers are oriented forward, at least initially.
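An illustrative sketch of such an initialization, building the list
back-to-front so the links point forward (details hedged):
static void create_free_list_sketch(struct k_mem_slab *slab)
{
    /* Prepend blocks starting from the highest address; the
     * finished list head is then the lowest block and the
     * links ascend from there. */
    char *block = slab->buffer +
                  (slab->info.num_blocks - 1U) * slab->info.block_size;

    slab->free_list = NULL;
    for (uint32_t i = 0U; i < slab->info.num_blocks; i++) {
        *(char **)block = slab->free_list;
        slab->free_list = block;
        block -= slab->info.block_size;
    }
}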
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Platforms that support IPIs allow them to be broadcast via the
new arch_sched_broadcast_ipi() routine (replacing arch_sched_ipi()).
Those that also allow IPIs to be directed to specific CPUs may
use arch_sched_directed_ipi() to do so.
As the kernel has the capability to track which CPUs may need an IPI
(see CONFIG_IPI_OPTIMIZE), this commit updates the signalling of
tracked IPIs to use the directed version if supported; otherwise
they continue to use the broadcast version.
Platforms that allow directed IPIs may see a significant reduction
in the number of IPI related ISRs when CONFIG_IPI_OPTIMIZE is
enabled and the number of CPUs increases. These platforms can be
identified by the Kconfig option CONFIG_ARCH_HAS_DIRECTED_IPIS.
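A hedged sketch of the dispatch logic described, close to but not
necessarily identical with the in-tree helper:
static void signal_pending_ipi_sketch(void)
{
#if defined(CONFIG_SMP) && defined(CONFIG_IPI_OPTIMIZE)
    uint32_t cpu_bitmap = (uint32_t)atomic_clear(&_kernel.pending_ipi);

    if (cpu_bitmap != 0) {
#ifdef CONFIG_ARCH_HAS_DIRECTED_IPIS
        /* Interrupt only the CPUs flagged as possibly
         * needing to reschedule. */
        arch_sched_directed_ipi(cpu_bitmap);
#else
        arch_sched_broadcast_ipi();
#endif
    }
#endif
}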
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The CONFIG_IPI_OPTIMIZE configuration option allows for the flagging
and subsequent signaling of IPIs to be optimized.
It does this by making each bit in the kernel's pending_ipi field
a flag that indicates whether the corresponding CPU might need an IPI
to trigger the scheduling of a new thread on that CPU.
When a new thread is made ready, we compare that thread against each
of the threads currently executing on the other CPUs. If there is a
chance that that thread should preempt the thread on the other CPU
then we flag that an IPI is needed for that CPU. That is, a clear bit
indicates that the CPU absolutely will not need to reschedule, while a
set bit indicates that the target CPU must make that determination for
itself.
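A simplified sketch of the flagging; the preemption test is reduced
here to a bare priority comparison (lower value means higher
priority), while the real check is more involved:
static void flag_ipi_sketch(const struct k_thread *new_thread)
{
    unsigned int num_cpus = arch_num_cpus();

    for (unsigned int i = 0; i < num_cpus; i++) {
        const struct k_thread *curr = _kernel.cpus[i].current;

        /* Set the CPU's bit only if the new thread *might*
         * preempt what that CPU is running; a clear bit is a
         * guarantee that no reschedule is needed there. */
        if ((curr != NULL) &&
            (new_thread->base.prio < curr->base.prio)) {
            atomic_or(&_kernel.pending_ipi, (atomic_val_t)BIT(i));
        }
    }
}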
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
1. The flagging of IPIs is moved out of k_thread_priority_set() into
z_thread_prio_set(). This allows for an IPI to be done for a thread
that had its priority bumped due to the handling of priority
inheritance from a mutex.
2. k_thread_priority_set()'s check for sched_locked only applies to
non-SMP builds that are using the old arch_swap() framework to switch
between threads.
Incidentally, nearly all calls to flag_ipi() are now performed with
sched_spinlock being locked. The only exception is in slice_timeout().
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Updates the CONFIG_PIPES Kconfig description to add a note that
enabling it will slightly increase the size of the thread structure.
This mirrors a similar comment in CONFIG_EVENTS.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Create `GEN_OFFSET_STRUCT` & `GEN_NAMED_OFFSET_STRUCT`, which
work for `struct` types, and remove the use of `z_arch_esf_t`
completely.
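Roughly, such a macro can be sketched in terms of the existing
GEN_ABSOLUTE_SYM() helper; the symbol naming scheme is assumed:
/* Emit an absolute symbol holding the member offset of a plain
 * struct (no typedef needed), for consumption by assembly via
 * the generated offsets header. */
#define GEN_OFFSET_STRUCT(S, M) \
    GEN_ABSOLUTE_SYM(__struct_ ## S ## _ ## M ## _ ## OFFSET, \
                     offsetof(struct S, M))
/* Example: offset of a hypothetical member r0 in struct arch_esf. */
GEN_OFFSET_STRUCT(arch_esf, r0);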
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Make `struct arch_esf` compulsory for all architectures by
declaring it in the `arch_interface.h` header.
After this commit, the named struct `z_arch_esf_t` is only used
internally to generate offsets, and is slated to be removed
from the `arch_interface.h` header in the future.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
z_device_is_ready duplicates the functionality of device_is_ready.
Calls to z_device_is_ready are made in kernel mode, so it is
safe to call its implementation directly.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Namespaced the generated headers with `zephyr` to prevent
potential conflict with other headers.
Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows developers to keep
using the old include paths for the time being, until the
option is deprecated and eventually removed. The Kconfig will
generate a build-time warning message, similar to
`CONFIG_TIMER_RANDOM_GENERATOR`.
Updated the include paths of in-tree sources accordingly.
Most of the changes here are scripted; check the PR for more
info.
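For example, for the generated syscall headers (exact file names
vary):
/* Old include path, still available while
 * CONFIG_LEGACY_GENERATED_INCLUDE_PATH=y (with a build-time
 * warning):
 *
 *   #include <syscalls/kernel.h>
 *
 * New namespaced path: */
#include <zephyr/syscalls/kernel.h>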
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
There is no need for this function to be defined inside the kernel,
since all places using it protect the call with PM ifdef guards.
This way we can also remove the ifdef condition inside the
implementation.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Updates z_smp_global_lock() to follow the pattern used in spinlocks
to relax the loop between atomic_cas() attempts.
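A sketch of the pattern, mirroring what spinlocks do between failed
atomic_cas() attempts:
static atomic_t global_lock;

void smp_global_lock_sketch(void)
{
    /* Relax the CPU between CAS attempts, as the spinlock
     * implementation does. */
    while (!atomic_cas(&global_lock, 0, 1)) {
        arch_spin_relax();
    }
}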
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
The k_thread_stack_free syscall was not checking if the caller
had permission to the given stack object.
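A hedged sketch of the verification handler shape; the macro and
object-type names follow the usual syscall verification pattern and
are assumed here:
#include <zephyr/internal/syscall_handler.h>
static int z_vrfy_k_thread_stack_free(k_thread_stack_t *stack)
{
    /* Fail unless the calling thread has been granted access
     * to this stack object. */
    K_OOPS(K_SYSCALL_OBJ(stack, K_OBJ_THREAD_STACK_ELEMENT));
    return z_impl_k_thread_stack_free(stack);
}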
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
This adds a new kconfig to indicate if architecture code
supports isolating thread stacks within the same domain,
and another new kconfig to selectively enable this
behavior.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This value isn't used outside of the PM subsystem, so don't build it.
More important than the four bytes of .bss was the use of an
atomic_inc(). Some platforms are forced to use
CONFIG_ATOMIC_OPERATIONS_C (but in almost all cases are single-core
devices that won't use atomics at runtime). There, this turns into a
function call that pulls in the whole atomics implementation.
Signed-off-by: Andy Ross <andyross@google.com>
The sys_bitfield_set_bit() and sys_bitfield_clear_bit() functions
work on pointer-sized elements. However, _thread_idx_map[] is a byte
array. On little endian systems, the bitops work fine, but on big
endian systems, changing the lower bits may actually manipulate
memory outside the array when CONFIG_MAX_THREAD_BYTES is not a
multiple of 4. So modify the code to perform bit ops on a per-byte
basis.
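A sketch of the per-byte approach (helper names hypothetical; BIT()
from zephyr/sys/util.h):
/* Operate on the byte containing the bit instead of casting the
 * byte array to a pointer-sized word; this is endian-safe. */
static inline void thread_idx_set_bit(uint8_t *map, size_t bit)
{
    map[bit / 8U] |= (uint8_t)BIT(bit % 8U);
}

static inline void thread_idx_clear_bit(uint8_t *map, size_t bit)
{
    map[bit / 8U] &= (uint8_t)~BIT(bit % 8U);
}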
Fixes #72430
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The struct z_page_frame is marked __packed to avoid extra padding as
such padding may represent significant memory waste when lots of page
frames are used. However this is a bad strategy.
The code contained this somewhat dubious comment and code in
free_page_frame_list_put():
/* The structure is packed, which ensures that this is true */
void *node = pf;
sys_slist_append(&free_page_frame_list, node);
This is bad for many reasons:
- type checking is completely bypassed;
- if the sys_snode_t node member is no longer located at the front of
struct z_page_frame then the code will still compile and possibly run
but be broken with memory corruption as a likely outcome;
- the sys_slist_append() code is completely unaware of the packed
attribute which breaks architectures with alignment restrictions.
Let's improve code efficiency as well as memory usage by removing the
packed attribute and manually packing the flags in the unused virtual
address bits. This way the page frame array remains naturally aligned,
data access becomes optimal and the actual array size gets even smaller.
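A hedged sketch of the packing, assuming a page-aligned virtual
address shares its word with the flag bits (field and flag names
illustrative):
/* Page-aligned addresses leave the low log2(page size) bits zero,
 * so flags can live there without enlarging or packing the struct. */
struct z_page_frame_sketch {
    uintptr_t va_and_flags;
};

#define PF_FLAG_PINNED  BIT(0)  /* illustrative flag bit */
#define PF_FLAGS_MASK   ((uintptr_t)(CONFIG_MMU_PAGE_SIZE - 1))

static inline void *pf_virt(const struct z_page_frame_sketch *pf)
{
    return (void *)(pf->va_and_flags & ~PF_FLAGS_MASK);
}

static inline bool pf_is_pinned(const struct z_page_frame_sketch *pf)
{
    return (pf->va_and_flags & PF_FLAG_PINNED) != 0;
}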
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.
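A hedged usage sketch (flag name taken from the existing page frame
flags; accessor signatures assumed):
static void pin_frame_sketch(struct z_page_frame *pf)
{
    /* Flag changes go through the accessors rather than direct
     * member manipulation, insulating callers from layout
     * changes. */
    z_page_frame_set(pf, Z_PAGE_FRAME_PINNED);
    /* ... */
    z_page_frame_clear(pf, Z_PAGE_FRAME_PINNED);
}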
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Dynamic code execution applications not using LLEXT for "extension"
loading are subject to the same linker optimization symbol resolution
issue described in commit 321e395 (in summary, libkernel.a syscalls
not used directly by the application result in weak symbol resolution
of their z_mrsh_ wrapper).
To support use cases where an application is using alternative
methods to load and execute code calling syscalls (likely from
userspace), or is using a mechanism the linker may not be aware of,
the configuration option has been decoupled from CONFIG_LLEXT (which
now selects it) and renamed to KERNEL_WHOLE_ARCHIVE.
Signed-off-by: Daniel Apperloo <daniel.apperloo@intel.com>
- modified parameter types to receive a const pointer when a
non-const pointer is not needed
- avoided redundant casts
Signed-off-by: Hess Nathan <nhess@baumer.com>
limit is an unsigned int and K_SEM_MAX_LIMIT is defined as UINT_MAX,
which means that limit can never be greater than K_SEM_MAX_LIMIT.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Spell checking tools do not recognize "iff", replace with "if and only if".
See https://en.wikipedia.org/wiki/If_and_only_if
Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
Update the kernel timeout logic to retrieve the system timer clock
frequency at runtime or statically, based on the Kconfig option
TIMER_READS_ITS_FREQUENCY_AT_RUNTIME.
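A sketch of the pattern involved; the macro shown is the common
idiom, not necessarily the exact code changed:
/* When the timer frequency is only known at runtime, derive
 * cycles-per-tick dynamically; otherwise fold it at compile time. */
#ifdef CONFIG_TIMER_READS_ITS_FREQUENCY_AT_RUNTIME
#define CYC_PER_TICK (sys_clock_hw_cycles_per_sec() / \
                      CONFIG_SYS_CLOCK_TICKS_PER_SEC)
#else
#define CYC_PER_TICK (CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC / \
                      CONFIG_SYS_CLOCK_TICKS_PER_SEC)
#endif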
Signed-off-by: Najumon B.A <najumon.ba@intel.com>