If something is tagged as nocache it didn't get cleared, which could
lead to weird behaviour where .bss memory is non-zero.
Signed-off-by: Peter van der Perk <peter.vanderperk@nxp.com>
Handle both of these sections in a single chunk of code instead of
separately. We don't need to use the legacy .ctors ABI as both
the constructors array and startup logic are managed within a single
link result.
This can now also be used with ARC MWDT which had been using the .ctors
sections but with .init_array semantics. For ARC MWDT, we now always
discard .dtors and .fini sections as Zephyr will never cause global
destructors to execute. Stop discarding .eh_frame sections so that
exception handling works as expected.
When building a NATIVE_APPLICATION, we ask the native C library to run all
of the constructors to ensure any non-Zephyr constructors are run before
main is invoked. It might be "nice" to split the constructors so that the
Zephyr constructors were executed by the Zephyr code while the non-Zephyr
ones were executed by the native C library. I think that could be done if
we knew the pathnames of either the Zephyr or non-Zephyr files. That might
make a good future enhancement.
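As a rough illustration (not taken from this change), a GCC/Clang
constructor like the one below ends up as a function pointer in
.init_array and is invoked by the startup logic described above before
main() runs:

    /* Illustrative sketch only: a constructor collected into .init_array. */
    static int early_value;

    static void __attribute__((constructor)) set_early_value(void)
    {
            early_value = 42; /* runs before main() via the .init_array walk */
    }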
Signed-off-by: Keith Packard <keithp@keithp.com>
Remove restrictions from device_init by allowing device initialization
to be performed if the device state flags it as not initialized.
This makes the API usable in contexts where device_deinit has been
called before.
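A hedged usage sketch (the node label `my_dev` is hypothetical and only
for illustration): re-initializing a device after it has been
de-initialized could look like this:

    #include <zephyr/device.h>

    /* Hypothetical devicetree node label, used purely for illustration. */
    static const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(my_dev));

    int reinit_example(void)
    {
            int ret = device_deinit(dev);

            if (ret == 0) {
                    /* With this change, re-initialization is allowed because
                     * the device state flags it as not initialized.
                     */
                    ret = device_init(dev);
            }
            return ret;
    }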
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Instead of passing a single init function, create
struct device_ops with the init function inside. This allows the device's
capabilities to be easily extended in the future without too much breakage,
e.g. to add a de-init call.
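A minimal sketch of the idea (field layout illustrative, not necessarily
the exact upstream definition):

    struct device_ops {
            /* Called by the kernel, or by device_init(), to bring the
             * device up.
             */
            int (*init)(const struct device *dev);
            /* Future extensions, e.g. a de-init call, can be added here
             * without changing how init entries are stored or sorted.
             */
    };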
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Introduce a new field to store device flags. Only the device deferred init
flag has been added, replacing the use of linker hackery to know whether a
device requires initialization at boot time or not. This change will be
helpful in the near future as devices become reference counted, so we
will need to know whether they have been initialized or not.
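A hedged sketch of the shape of such a flags field (type and macro names
are illustrative only):

    #include <zephyr/sys/util.h> /* for BIT() */

    /* Illustrative only: a small flags field on struct device replacing
     * the linker-section trick for deferred initialization.
     */
    typedef uint8_t device_flags_t;

    /* Device initialization is deferred (not run at boot). */
    #define DEVICE_FLAG_INIT_DEFERRED BIT(0)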
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Such a union is rather redundant, considering a simple const cast can be
done when initializing the init entry. Note that the init_entry does not
need to be touched now that struct device stores the init call. It is
merely an init entry sorted by linker scripts, so we can intertwine
devices and SYS_INIT.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Device init function is no longer taken from `struct init_entry`, so
there's no need to keep such a union.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
It just complicates things. It is not part of the C99 standard, and since
C11 is not mandatory, it is better to play it safe here.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Previously, when stack canaries were enabled, Zephyr applied this
protection to all functions. This commit introduces a new option that
allows stack canary protection to be applied selectively to specific
functions based on certain criteria.
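For example (hedged; the exact Kconfig option and selection criteria come
from this change, but the mechanism below is the standard GCC one),
building with -fstack-protector-explicit lets individual functions opt in
through a function attribute:

    #include <stddef.h>

    /* With -fstack-protector-explicit only functions carrying the GCC
     * stack_protect attribute get a canary; everything else is left alone.
     */
    static void __attribute__((stack_protect))
    parse_untrusted_input(const char *buf, size_t len)
    {
            char local[64];

            /* ... copy/parse into local ... */
            (void)buf;
            (void)len;
            (void)local;
    }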
Signed-off-by: Flavio Ceolin <flavio.ceolin@gmail.com>
Sleeping and suspended are now orthogonal states. That is, a thread
may be both sleeping and suspended and the two do not interact. One
repercussion of this is that suspending a thread will no longer
abort its timeout.
Threads are now created in the 'sleeping' state instead of a
'suspended' state. This dovetails nicely with the start delay that
can be given to a newly created thread--it is as though the very
first operation performed by a thread with a start delay is a sleep.
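For instance, a thread created with a start delay now simply begins life
sleeping for that interval (sketch; identifiers are illustrative):

    #include <zephyr/kernel.h>

    K_THREAD_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker_thread;

    static void worker(void *a, void *b, void *c)
    {
            /* ... */
    }

    void spawn_worker(void)
    {
            /* The 100 ms start delay behaves as if the thread's very first
             * operation were a sleep: the thread is created sleeping, not
             * suspended.
             */
            k_thread_create(&worker_thread, worker_stack,
                            K_THREAD_STACK_SIZEOF(worker_stack),
                            worker, NULL, NULL, NULL,
                            5, 0, K_MSEC(100));
    }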
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Traditionally threads have been initialized with a PRESTART flag set,
which gets cleared when the thread runs for the first time via either
its timeout or the k_thread_start() API.
But if you think about it, this is no different, semantically, than
SUSPENDED: the thread is prevented from running until the flag is
cleared.
So unify the two. Start threads in the SUSPENDED state, point
everyone looking at the PRESTART bit to the SUSPENDED flag, and make
k_thread_start() be a synonym for k_thread_resume().
There is some mild code size savings from the eliminated duplication,
but the real win here is that we make space in the thread flags byte,
which had run out.
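A sketch of how the two now relate from the API side (identifiers
illustrative):

    #include <zephyr/kernel.h>

    K_THREAD_STACK_DEFINE(job_stack, 1024);
    static struct k_thread job_thread;

    static void job(void *a, void *b, void *c)
    {
            /* ... */
    }

    void start_job(void)
    {
            /* K_FOREVER leaves the new thread parked, now in the SUSPENDED
             * state rather than the old PRESTART state...
             */
            k_tid_t tid = k_thread_create(&job_thread, job_stack,
                                          K_THREAD_STACK_SIZEOF(job_stack),
                                          job, NULL, NULL, NULL,
                                          5, 0, K_FOREVER);

            /* ...and k_thread_start() is a synonym for k_thread_resume(). */
            k_thread_start(tid);
    }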
Signed-off-by: Andy Ross <andyross@google.com>
Up until now, the `__thread` keyword has been used for declaring
variables as Thread local storage. However, `__thread` is a GNU
specific keyword which thus limits compatibility with other
toolchains (for instance IAR).
This PR introduces a new macro `Z_THREAD_LOCAL` which expands to the
corresponding C11, C23 or C++11 standard keyword based on the standard
that is specified during compilation, else it uses the old `__thread`
keyword.
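A rough sketch of the selection logic (the exact upstream version checks
may differ):

    /* Illustrative only: pick the standard keyword when available, else
     * fall back to the GNU-specific __thread.
     */
    #if defined(__cplusplus) && (__cplusplus >= 201103L)
    #define Z_THREAD_LOCAL thread_local
    #elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 202311L)
    #define Z_THREAD_LOCAL thread_local
    #elif defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 201112L)
    #define Z_THREAD_LOCAL _Thread_local
    #else
    #define Z_THREAD_LOCAL __thread
    #endif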
Signed-off-by: Daniel Flodin <daniel.flodin@iar.com>
`CONFIG_MP_NUM_CPUS` has been deprecated for more than 2
releases; it's time to remove it.
Updated all usage of `CONFIG_MP_NUM_CPUS` to
`CONFIG_MP_MAX_NUM_CPUS`.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
Do not use SYS_INIT for initializing irq_offload when it is enabled;
instead, use a new interface that is called during the boot process for
all architectures.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Introduce SoC and board hooks to replace arch-specific code
and the usages of SYS_INIT for platform initialization.
include/zephyr/platform/hooks.h introduces the hooks to be implemented
by boards and SoCs.
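As a hedged example (the hook name below is assumed from the description
above), a board could then provide an implementation such as:

    #include <zephyr/platform/hooks.h>

    /* Runs early in the boot process, replacing what used to be a
     * board-level SYS_INIT() call.
     */
    void board_early_init_hook(void)
    {
            /* board-specific pin, clock, or power setup */
    }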
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Add support for passing args to main(). The
content of bootargs is taken from get_bootargs()
which should be implemented for each loader and
then it is split into args and passed to main.
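A hedged sketch of the application side, assuming main() receives the
already-split arguments when bootargs support is enabled:

    #include <zephyr/kernel.h>

    /* Illustrative only: the string returned by get_bootargs() is split
     * into argc/argv before main() is invoked.
     */
    int main(int argc, char *argv[])
    {
            for (int i = 0; i < argc; i++) {
                    printk("arg %d: %s\n", i, argv[i]);
            }
            return 0;
    }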
Signed-off-by: Jakub Michalski <jmichalski@internships.antmicro.com>
Signed-off-by: Filip Kokosinski <fkokosinski@antmicro.com>
* Move ctors and init_array from the CPP library
to the kernel library, as this is common for both C
and C++ and it is the kernel that runs them.
* Rename the hidden Kconfig option CPP_STATIC_INIT_GNU to
STATIC_INIT_GNU.
* If STATIC_INIT_GNU is not selected, verify there are no
constructors left behind.
* Rename common-rom-cpp.ld to common-rom-init.ld
* Rename z_cpp_init_static to z_init_static,
and have the kernel always call it.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
Signed-off-by: Keith Packard <keithp@keithp.com>
Created tracing APIs to trace the entry and (exit + result) of
SYS_INIT and DEVICE_DT_DEFINE (and friends).
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
This renames z_phys_map() and z_phys_unmap() to
k_mem_map_phys_bare() and k_mem_unmap_phys_bare()
respectively. This is part of the series to move memory
management functions away from the z_ namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
As their stacks are defined by Zephyr's kernel/thread stack definition
macro, it is better to use Zephyr's kernel/thread stack size macro for their
stack size, ensuring consistency and preventing potential issues related to
stack size misconfiguration.
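The pattern being enforced, roughly (identifiers illustrative):

    #include <zephyr/kernel.h>

    K_THREAD_STACK_DEFINE(my_stack, 2048);

    /* Use the matching size macro instead of a hard-coded 2048: it accounts
     * for any reserved space added by the stack definition macro.
     */
    static const size_t my_stack_size = K_THREAD_STACK_SIZEOF(my_stack);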
Signed-off-by: Dong Wang <dong.d.wang@intel.com>
Move this to a call in the init process. arch_* calls are not services
and should be called consistently during initialization.
Place it between PRE_KERNEL_1 and PRE_KERNEL_2 as some drivers
initialized in PRE_KERNEL_2 might depend on SMP being set up.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Namespaced the generated headers with `zephyr` to prevent
potential conflict with other headers.
Introduce a temporary Kconfig `LEGACY_GENERATED_INCLUDE_PATH`
that is enabled by default. This allows developers to
continue using the old include paths for the time being
until it is deprecated and eventually removed. The Kconfig will
generate a build-time warning message, similar to the
`CONFIG_TIMER_RANDOM_GENERATOR`.
Updated the include paths of in-tree sources accordingly.
Most of the changes here are scripted, check the PR for more
info.
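For example (header name illustrative; see the PR for the full list of
affected generated headers), an include of a generated syscall header
changes from:

    /* before */
    #include <syscalls/kernel.h>

    /* after: generated headers are namespaced under zephyr/ */
    #include <zephyr/syscalls/kernel.h>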
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
This value isn't used outside of the PM subsystem, so don't build it.
More important than the four bytes of .bss was the use of an
atomic_inc(). Some platforms are forced to use
CONFIG_ATOMIC_OPERATIONS_C (but in almost all cases are single-core
devices that won't use atomics at runtime). There, this turns into a
function call that pulls in the whole atomics implementation.
Signed-off-by: Andy Ross <andyross@google.com>
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU, everyone can
share the same one.
The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch(). Leave this in a separate commit for bisection
purposes, but the risk seems very low.
Signed-off-by: Andy Ross <andyross@google.com>
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).
But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening. The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct! The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.
Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this. There is some non-trivial
memory cost to that; thread structs on many architectures are large.
Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.
Fixes #64646
Signed-off-by: Andy Ross <andyross@google.com>
This reverts commit 61c70626a5.
This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
This reverts commit 20611f13ca.
This PR introduced 2 regressions in main CI:
71977 & 71978
Let's revert it for now to get main's CI passing again.
Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
Nicolas Pitre points out that since these thread structs are just
dummies for the context switching, they can be presumed to be "write
only" and thus there's no point in having one per CPU, everyone can
share the same one.
The only gotcha is that we never really documented (nor really have a
place to document) that rule, so it's not theoretically impossible for
an architecture to read back what it might have written underneath
arch_switch(). Leave this in a separate commit for bisection
purposes, but the risk seems very low.
Signed-off-by: Andy Ross <andyross@google.com>
After a k_thread_abort(), the resulting thread struct is documented as
unused/free memory that may be re-used (for example, to respawn a new
thread).
But in the special case of aborting the current thread from within an
ISR, that wasn't quite happening. The scheduler cleanup would
complete, but the architecture layer would still try to context switch
away from the aborted thread on exit, and that can include writes to
the now-reused thread struct! The specifics will depend on
architecture (some do a full context save on entry, most don't), but
in the case of USE_SWITCH=y it will at the very least write the
switch_handle field.
Fix this simply, with a per-cpu "switch dummy" thread struct for use
as a target for context switches like this. There is some non-trivial
memory cost to that; thread structs on many architectures are large.
Pleasingly, this also addresses a known deadlock on SMP: because the
"spin in ISR" step now happens as the very last stage of
k_thread_abort() handling, the existing scheduler lock works to
serialize calls such that it's impossible for a cycle of threads to
independently decide to spin on each other: at least one will see
itself as "already aborting" and break the cycle.
Fixes #64646
Signed-off-by: Andy Ross <andyross@google.com>
Currently, all devices are initialized at boot time (following their
level and priority order). This patch introduces deferred
initialization: by setting the property `zephyr,deferred-init` on a
device in the devicetree, Zephyr will not initialize the device.
To initialize such devices, one has to call `device_init()`.
Deferred initialization is done by grouping all deferred devices in a
different ELF section. In this way, there's no need to consume more
memory to keep track of deferred devices. When `device_init()` is
called, Zephyr will scan the deferred devices section and call the
initialization function for the matching device. As this scanning is
done only during deferred device initialization, its cost should be
bearable.
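A hedged usage sketch (the node label is hypothetical): mark the device
with `zephyr,deferred-init` in the devicetree, then bring it up on demand:

    #include <zephyr/device.h>

    /* Devicetree overlay (illustrative):
     *   &my_sensor { zephyr,deferred-init; };
     * The device ends up in the deferred-devices ELF section and is
     * skipped at boot.
     */
    static const struct device *sensor =
            DEVICE_DT_GET(DT_NODELABEL(my_sensor));

    int bring_up_sensor(void)
    {
            /* Scans the deferred-devices section and runs the matching
             * device's init function.
             */
            return device_init(sensor);
    }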
Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
Simple rename to align with the kernel naming scheme. This is being
used throughout the tree, especially in the architecture code.
As this is not a private API internal to the kernel, prefix it
appropriately with K_.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a closing comment to the endif with the configuration
information to which the endif belongs.
This makes the code clearer when the configs need adaptation.
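For example:

    #ifdef CONFIG_SCHED_CPU_MASK
    /* ... */
    #endif /* CONFIG_SCHED_CPU_MASK */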
Signed-off-by: Simon Hein <Shein@baumer.com>
Move out of thread and put directly in init.c where it is being used.
Also remove definition from kernel.h, this is an internal function and
should not be in a public header.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
The functions to manipulate the essential flag indeed operate on
threads, but they are misplaced in the thread implementation file. Put
them alongside other routines that set other thread flags, and clean up
headers a bit.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Some platforms already have the .bss section zeroed out externally before
Zephyr initialization, so there is no sense in zeroing it out a second time
in software.
Such a boot-time optimization could be critical, e.g. for RTL simulation.
Signed-off-by: Alexander Razinkov <alexander.razinkov@syntacore.com>
The early random get function was making many wrong assumptions
about the random subsys and entropy drivers. First, it was assuming
that entropy_get_entropy() would be ISR safe; that is not right,
the driver has an ISR-safe callback and if it is not implemented
or not working it is not OK to use the other callback.
Second, the fallback to the random subsys is even more problematic
since it can use kernel services to protect internal state and be
thread-safe.
Another incorrect thing in this function was the guard around it.
It was needed by features like stack randomization and stack canaries,
and not only when those conditions were met. Just remove it, and in case
it is not needed the linker will take care of it.
The drawback of this change is that in the absence of an entropy
generator with support for being called from an ISR, the randomness is
very weak.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Allow targets to come up with their own early random generator,
since the default can be not so random due to constraints.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Rename z_early_boot_rand_get to z_early_rand_get to be consistent
with other early functions.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
rand32.h does not make much sense, since the random subsystem
provides more APIs than just getting a random 32-bit value.
Rename it to random.h to be consistent with other
subsystems.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Integrates object core statistics framework into the following
kernel objects:
sys_mem_blocks, k_mem_slab
threads, _cpu, z_kernel
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Refactors CPU usage (thread runtime stats) to make it easier to
integrate with the object core statistics framework.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Add a new option to use thread-local storage for stack
canaries. This makes it harder to find the canaries' location
and value. This is made optional because there is
a performance and size penalty when using it.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Only set a CPU as active (in the PM subsystem) when the CPU is effectively
initialized. We cannot assume in the PM subsystem that all CPUs were
initialized since, when the option CONFIG_SMP_BOOT_DELAY is used, CPUs are
initialized on demand by the application.
Note that once CPUs are properly initialized the subsystem is able to track
their status.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The exception handler (arch/x86/core/ia32/excstub.S) may access the
_kernel variable, which will lead to a failure when paging is enabled,
so make this critical variable pinned.
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>