Using char pointers with %p should be avoided in log messages. It
causes issues in configurations where logging strings are removed from
the binary and thus cannot be inspected when cbprintf packages are
built from a logging string. In that case, any char pointer is treated
as a string and copied into the package body.
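For illustration, a minimal sketch of the safe pattern (function and
argument names hypothetical): cast char pointers to `void *` so the
packager cannot mistake them for strings.
```c
#include <zephyr/logging/log.h>

LOG_MODULE_REGISTER(app);

void log_buffer_address(char *buf)
{
	/* Cast to void * so that, when logging strings are stripped from
	 * the binary, the packager does not treat the argument as a
	 * string and copy it into the package body.
	 */
	LOG_INF("buffer at %p", (void *)buf);
}
```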
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
Introduce a Kconfig option (MP_MAX_NUM_CPUS) and an API, arch_num_cpus(),
to allow for systems that might determine the number of CPUs available to
Zephyr at runtime.
CONFIG_MP_MAX_NUM_CPUS is intended to be used for any array initialization
and anything else that needs to occur at build time. For most systems,
arch_num_cpus() will just report the value of CONFIG_MP_MAX_NUM_CPUS.
The intent is to phase out CONFIG_MP_NUM_CPUS.
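For a fixed-topology system the new API can be a trivial wrapper; a
sketch of what such a default might look like:
```c
#include <zephyr/toolchain.h>

/* Sketch: report the build-time maximum. Systems that discover their
 * CPU count at runtime would return the discovered value instead.
 */
static ALWAYS_INLINE unsigned int arch_num_cpus(void)
{
	return CONFIG_MP_MAX_NUM_CPUS;
}
```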
Signed-off-by: Kumar Gala <kumar.gala@intel.com>
Warnings are treated as errors when building:
error: this 'for' clause does not guard... [-Werror=misleading-indentation]
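The warning flags code like the following hypothetical fragment, where
indentation suggests a statement is guarded by the `for` clause when it
is not; braces make the intended scope explicit:
```c
extern void foo(int i);
extern void bar(void);

void process(int n)
{
	/* Before the fix, bar(); was indented as if it were inside the
	 * loop body, although only foo(i) is guarded by the 'for'.
	 */
	for (int i = 0; i < n; i++) {
		foo(i);
	}
	bar();
}
```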
Signed-off-by: Francois Ramu <francois.ramu@st.com>
The _SYS_INIT_LEVEL* definitions were used to indicate the index entry
into the levels array defined in init.c (z_sys_init_run_level). init.c
uses this information internally, so there is no point in exposing this
in a public header. It has been replaced with an enum inside init.c. The
device shell was re-using the same defines to index its own array. This
is a fragile design; the shell needs to be responsible for its own data
indexing. A similar situation happened with some unit tests.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The function in charge of calling all init functions was defined in
device.c, had a public prototype, and was only used in init.c. Since this
is really an internal function tied to kernel init code, move it to
init.c and make it static; there's no need to expose it publicly.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
The `ARCH` init level was added to solve a specific problem: calling init
code (SYS_INIT/devices) before `z_cstart` on the `intel_adsp` platform.
The documentation claims it runs before `z_cstart`, but this is only
true if the SoC/arch takes care of calling:
```c
z_sys_init_run_level(_SYS_INIT_LEVEL_ARCH);
```
which is only done by `intel_adsp` nowadays. So in practice, we now
have a platform-specific init level. This patch proposes to do things in
a slightly different way. First, the level name is renamed to `EARLY`, to
emphasize that it runs in the early stage of the boot process. Then, it is
handled by the kernel (inside `z_cstart()`, before calling
`arch_kernel_init()`). This means that any platform can now use this
level. For `intel_adsp`, there should be no changes, other than
`gcov_static_init()` being called earlier (I assume this will allow
obtaining coverage for code called in EARLY?).
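A sketch of how any platform can now hook this level (function name
hypothetical, using the SYS_INIT callback signature of this era):
```c
#include <zephyr/init.h>
#include <zephyr/toolchain.h>

/* Runs inside z_cstart(), before arch_kernel_init(), so only very
 * limited kernel services are available at this point.
 */
static int early_platform_init(const struct device *dev)
{
	ARG_UNUSED(dev);
	/* e.g. set up an early console or memory */
	return 0;
}

SYS_INIT(early_platform_init, EARLY, 0);
```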
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
For historical reasons[1] suspending threads would release the
scheduler lock between pend() (which places the current thread onto a
wait queue) and z_swap() (which effects the context switch). This
process happens with the caller's lock held, so local interrupts are
masked. But on SMP this opens a tiny race where another CPU could
grab the pended thread and switch to it while we were still executing
on its stack!
Fix this by elevating the "lock swap" code that already exists in the
(portable/switch-based) z_swap() code one level so that it happens in
z_pend_curr() also. Now we hold the scheduler lock between pend and
the final context switch.
Note that this technique can't work for the older z_swap_irqlock()
implementation, which exists to vestigially support a few bits of arch
code (mostly direct interrupts) that don't work on SMP anyway.
Address this with an assert to prevent future misuse.
[1] z_swap() is a historical API implemented in per-arch assembly for
older architectures (like ARM32!). It was designed to be called
with what at the time was a global IRQ lock, so it doesn't
understand the idea of a separate scheduler lock. When we finally
get all architectures on arch_switch() this design can be cleaned up
quite a bit.
Signed-off-by: Andy Ross <andyross@google.com>
We have cases where some devices need to be initialized very early,
before z_cstart is called, e.g. to set up a very early console or to set
up memory. Traditionally this would be hardcoded as part of the SoC layer,
not using the device model or the init levels.
This patch adds a new level ARCH, which will be called in early
architecture code and before we jump to the kernel code.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
k_heap_aligned_alloc was not handling the K_FOREVER timeout
correctly due to an unsigned return value. Added explicit
K_FOREVER handling of the end time.
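A hedged sketch of the corrected logic (helper name hypothetical),
keeping K_FOREVER out of the unsigned end-time arithmetic:
```c
#include <zephyr/kernel.h>

/* Sketch: decide whether to keep blocking without letting K_FOREVER
 * flow through an unsigned end-time computation.
 */
static bool keep_waiting(k_timeout_t timeout, uint64_t end, uint64_t now)
{
	if (K_TIMEOUT_EQ(timeout, K_FOREVER)) {
		return true; /* wait indefinitely */
	}

	return now < end;
}
```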
Fixes #50611.
Signed-off-by: Jay Shoen <jay.shoen@perceive.io>
The interrupt stack is used as the system stack during kernel
initialization while IRQs are not yet enabled. The sp register is
set to z_interrupt_stacks + CONFIG_ISR_STACK_SIZE.
CONFIG_ISR_STACK_SIZE only represents the desired usable stack size.
This does not take into account the added guard area. The result is a
stack whose pointer is much closer to the trigger zone than expected when
CONFIG_PMP_STACK_GUARD=y, and the SMP configuration in particular pushes
it over the edge during many CI test cases.
Worse: during early init we're not quite ready to handle exceptions
yet and complete havoc ensues with no meaningful debugging output.
Make sure the early assembly code locates the actual top of the stack
by generating a constant with its true size.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
Obtaining the CPU outside of the spin locks on SMP would result in
the assert __ASSERT(!z_smp_mobile()) failing, which makes sense, as
the current CPU may change.
Signed-off-by: Tom Burdick <thomas.burdick@intel.com>
The requirement for k_yield() to handle "yielding" in the idle thread
was removed a while back, but that change missed a spot where we'd try
to yield in the fallback loop on bringup platforms that lack an IPI. This now
crashes, because yield now unconditionally tries to reschedule the
current thread, which doesn't work for idle threads that don't live in
the run queue.
Just make it a busy loop calling swap(), which is even simpler.
Fixes #50119
Signed-off-by: Andy Ross <andyross@google.com>
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY=y, the CPU mask
is fixed and cannot be changed while a thread is running.
The current code asserts if the thread state is anything but PREPARED.
We do, however, have interfaces like k_work_queue_start() where a thread
is started as part of the queue start. To allow the user to set the
pinned CPU for the work queue thread, it needs to be possible to suspend
the thread, set the mask, and then call k_thread_resume(). This seems to
be a valid sequence, so relax the assert check to reflect this.
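A sketch of the now-permitted sequence (queue, stack size, priority,
and CPU id hypothetical; requires CONFIG_SCHED_CPU_MASK):
```c
#include <zephyr/kernel.h>

K_THREAD_STACK_DEFINE(wq_stack, 1024);
static struct k_work_q wq;

void start_pinned_queue(int cpu)
{
	k_work_queue_start(&wq, wq_stack, K_THREAD_STACK_SIZEOF(wq_stack),
			   K_PRIO_COOP(4), NULL);

	/* The queue thread is already running, so suspend it, update the
	 * pinned CPU, then resume: the sequence the relaxed assert allows.
	 */
	k_thread_suspend(&wq.thread);
	k_thread_cpu_mask_clear(&wq.thread);
	k_thread_cpu_mask_enable(&wq.thread, cpu);
	k_thread_resume(&wq.thread);
}
```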
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
k_poll does not currently allow polling on pipes. This adds support
for doing so on buffered pipes.
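A minimal sketch of the resulting usage (pipe name and sizes
hypothetical):
```c
#include <zephyr/kernel.h>

K_PIPE_DEFINE(my_pipe, 64, 4); /* hypothetical buffered pipe */

void wait_for_pipe_data(void)
{
	struct k_poll_event event;

	k_poll_event_init(&event, K_POLL_TYPE_PIPE_DATA_AVAILABLE,
			  K_POLL_MODE_NOTIFY_ONLY, &my_pipe);

	/* Blocks until the pipe has buffered data available to read */
	k_poll(&event, 1, K_FOREVER);
}
```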
Signed-off-by: Jeremy Herbert <jeremy.006@gmail.com>
When a cache API function is called from userspace, this results in an
OOPS (bad syscall error) on ARM64. This is due to at least two
different factors:
- the location of the cache handlers is preventing the linker from
actually finding the handlers
- specifically for ARM64 and ARC, some cache handling functions are not
implemented (when userspace is not used, the compiler simply optimizes
out these calls)
Fix the problem by:
- moving the userspace cache handlers to their logical and proper
location (in the drivers directory)
- adding the missing handlers for ARM64 and ARC
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
Many device pointers are initialized at compile time and never changed.
This means that the device pointer can be constified (immutable).
Automated using:
```
perl -i -pe 's/const struct device \*(?!const)(.*)= DEVICE/const struct device *const $1= DEVICE/g' **/*.c
```
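The resulting pattern, sketched with a hypothetical devicetree node:
```c
#include <zephyr/device.h>
#include <zephyr/devicetree.h>

/* Both the pointer and the pointed-to device are now immutable
 * (node label hypothetical).
 */
static const struct device *const uart_dev =
	DEVICE_DT_GET(DT_NODELABEL(uart0));
```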
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
There's no point in doing this when the host OS clears all memory at
mapping time. And as it turns out, the __bss_end symbol it was
relying on actually comes from the host toolchain's linker, not our
own linker scripts (making it semi-dangerous to rely on). And it's
not present in clang/lld output anyway.
Signed-off-by: Andy Ross <andyross@google.com>
This new implementation of pipes has a number of advantages over the
previous.
1. The schedule locking is eliminated both making it safer for SMP
and allowing for pipes to be used from ISR context.
2. The code used to be structured to have separate code for copying
to/from a waiting thread's buffer and the pipe buffer. This had
unnecessary duplication that has been replaced with a simpler
scatter-gather copy model.
3. The manner in which the "working list" is generated has also been
simplified. It no longer tries to use the thread's queuing node.
Instead, the k_pipe_desc structure (whose instances are part
of the k_thread structure) has been extended to contain
additional fields including a node for use with a linked list. As
this impacts the k_thread structure, pipes are now configurable
in the kernel via CONFIG_PIPES.
Fixes #47061
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Say threadA holds a mutex and threadB tries to lock it with a
timeout: a race would occur if threadA unlocks that mutex after
threadB got unpended by sys_clock but before it gets scheduled
and calls k_spin_lock.
This patch fixes the issue by checking the mutex's status again
after the k_spin_lock call.
Fixes #48056
Signed-off-by: Qi Yang <qi.yang@cmind-semi.com>
Fixes #46324
Set dummy_thread->base.slice_ticks to 0 when
CONFIG_TIMESLICE_PER_THREAD is set, to avoid
_current_cpu->slice_ticks becoming a big number.
Signed-off-by: Hu Zhenyu <zhenyu.hu@intel.com>
Fixes an issue in sys_clock_tick_get() that could lead to drift in
a k_timer handler. The handler is invoked in the timer ISR as a
callback in sys_clock_announce().
1. The handler invokes k_uptime_ticks().
2. k_uptime_ticks() invokes sys_clock_tick_get().
3. sys_clock_tick_get() must call elapsed() and not
sys_clock_elapsed() as we do not want to count any
unannounced ticks that may have elapsed while
processing the timer ISR.
Fixes #46378
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Updates sys_clock_announce() such that the <announce_remaining> update
calculation is done after the callback. This prevents another core from
entering the timeout processing loop before the first core leaves it.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
There is no easy way to clear event bits without the potential
for a race between producer(s) and consumer(s). The result of
this race is that events can be lost through the various
resetting mechanisms available (the flag to k_event_wait(), or
k_event_set()).
Add k_event_set_masked() which permits bits to be set or cleared.
This allows consumers to clear just the bits that they have read
without (accidentally) discarding any new bits.
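A sketch of the consumer-side pattern this enables (event object and
mask hypothetical):
```c
#include <zephyr/kernel.h>

K_EVENT_DEFINE(my_events);

#define CONSUMED_EVENTS 0x0000000Fu

void consume_events(void)
{
	uint32_t events = k_event_wait(&my_events, CONSUMED_EVENTS,
				       false, K_FOREVER);

	/* Clear only the bits we actually observed; bits a producer set
	 * in the meantime are preserved.
	 */
	k_event_set_masked(&my_events, 0, events);
}
```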
Update unit tests to verify the functionality.
Partly fixes #46117.
Signed-off-by: Andrew Jackson <andrew.jackson@amd.com>
Although there is nothing wrong with the existing code,
it doesn't permit individual bits to be set (or cleared).
This makes further changes slightly awkward.
Use a mask to restrict the bits set in an event.
Signed-off-by: Andrew Jackson <andrew.jackson@amd.com>
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)
Use `bool` instead of `int` to represent Boolean values.
Use `do { ... } while (false)` instead of `do { ... } while (0)`.
Use comparisons with zero instead of implicitly testing integers.
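An illustrative before/after of these transformations (hypothetical
fragment):
```c
#include <stdbool.h>

bool any_pending(int count)
{
	bool pending = false; /* was: int pending = 0; */

	do {
		if (count != 0) { /* was: if (count) */
			pending = true;
		}
	} while (false); /* was: while (0) */

	return pending;
}
```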
This commit is a subset of the original commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a
Signed-off-by: Simon Hein <SHein@baumer.com>
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci
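The converted pattern looks like this (sketch):
```c
#include <zephyr/kernel.h>

void critical_work(void)
{
	unsigned int key = irq_lock(); /* was: int key = irq_lock(); */

	/* ... critical section ... */

	irq_unlock(key);
}
```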
Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
Adds memory usage runtime stats routines that parallel those used
by both the heap and mem_blocks. This helps maintain some level
of consistency across the different memory types.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
Move scripts needed by the build system and not designed to be run
individually or standalone into the build subfolder.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Update the two locations that use two `SYS_INIT` macros with the same
initialisation functions to use `SYS_INIT_NAMED`.
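A sketch of the pattern (names and levels hypothetical): plain
`SYS_INIT` derives its symbol from the function name, so registering
the same function twice needs explicit, distinct names.
```c
#include <zephyr/init.h>
#include <zephyr/toolchain.h>

static int common_init(const struct device *dev)
{
	ARG_UNUSED(dev);
	return 0;
}

/* Two registrations of the same function: SYS_INIT_NAMED avoids the
 * symbol collision plain SYS_INIT would cause.
 */
SYS_INIT_NAMED(common_init_early, common_init, PRE_KERNEL_1, 0);
SYS_INIT_NAMED(common_init_late, common_init, POST_KERNEL, 0);
```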
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
Add a minimal EFI console driver to support printf. This console driver
only supports console output; without it, printf will not work.
Signed-off-by: Enjia Mai <enjia.mai@intel.com>
Adds compatibility with Intel ADSP GDB from Zephyr SDK and
from Cadence toolchain to coredump_gdbserver.py.
Adds CAVS 15-25 (APL) register definitions. Implements
handle_register_single_read_packet to serve ADSP GDB
'p' packets.
Prevents the BSA from changing between the stack dump printout
and the coredump by taking a lock. Observed to be necessary for
accurate results on slower simulated platforms.
Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
Logging v1 has been removed and the log_strdup wrapper function is no
longer needed. Remove the function and its uses in the tree.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
This commit updates all deprecated `K_KERNEL_PINNED_STACK_ARRAY_EXTERN`
macro usages to use the `K_KERNEL_PINNED_STACK_ARRAY_DECLARE` macro
instead.
Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
Files including <zephyr/kernel.h> do not have to include
<zephyr/zephyr.h>, a shim to <zephyr/kernel.h>.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Rename the symbols used to denote the locations of the global
constructor lists and modify the Zephyr start-up code accordingly.
On POSIX systems this ensures that the native libc init code won't
find any constructors to run before Zephyr loads.
Fixes #39347, #36858
Signed-off-by: David Palchak <palchak@google.com>
Use a new environment variable,
ZEPHYR_TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE, to set the value for
TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE instead of setting it to 'n' for
all non-Zephyr toolchains. In particular, the Debian arm-none-eabi
toolchain has TLS support and with this option, can be used to build
Zephyr with thread local variables.
Signed-off-by: Keith Packard <keithp@keithp.com>
Documentation specifies that aborting/terminating/exiting essential
threads is a system panic condition, but we didn't actually implement
that and allowed it just as for any other thread. At least one app wants
to exploit this documented behavior as a "watchdog" kind of condition,
and that seems reasonable. Do what we say we're supposed to do.
This also includes a small fix to a test, which seemed like it was
written to exercise exactly this condition, except that it failed to
detect whether or not a system fatal error was actually signaled and so
(incorrectly) indicated "success". Check that we actually enter
the handler.
Fixes #45545
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The function k_thread_runtime_stats_all_get() now populates the
current_cycles field in the thread runtime stats structure.
Resets the number of cycles in the CPU's current usage window once
the idle thread is scheduled.
Fixes the average_cycles calculation.
Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
For a library which already provides a multi-thread aware errno, use
that instead of creating our own internal value.
Signed-off-by: Keith Packard <keithp@keithp.com>
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to
yield, unlike the public function k_work_submit_to_queue().
When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield in the context of the spinlock of the
object that became available.
Fixes #45267
Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Implements a function that application and driver code can use to check
whether it is valid to yield (or block) in the current context. This
check is required for functions that can feasibly be run from multiple
contexts. The primary intended use case is power management transition
functions, which can be run by application code explicitly or
automatically in the idle thread by system PM.
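A sketch of the intended usage, assuming the function is exposed as
`k_can_yield()` (transition helper hypothetical):
```c
#include <zephyr/kernel.h>

/* May run from a thread (explicit call) or from the idle thread
 * (automatic system PM), so probe the context before blocking.
 */
void wait_for_power_good(void)
{
	if (k_can_yield()) {
		k_sleep(K_USEC(50)); /* valid to block here */
	} else {
		k_busy_wait(50); /* spin: yielding is not valid */
	}
}
```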
Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
This adds lazy floating point context switching. On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away. If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored. If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.
The undefined instruction handler is responsible for saving away the
floating point context if needed. If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved. Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Do not allow changing the CPU to which a thread is pinned while it is
being executed. This allows further optimizations on some platforms
with incoherent memory, since we can safely assume that the thread
will run on the same CPU and avoid invalidating/flushing the cache
during context switches.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The k_timer utility was written to assume that the kernel timeout
handler would never be delayed by more than a tick, so it can naively
reschedule the next interrupt with a simple delay.
Unfortunately real platforms have glitchy hardware and high tick
rates, and on intel_adsp we're seeing this promise being broken in
some circumstances.
It's probably not a good idea to try to plumb the timer driver
interface up into the IPC layer to do this correction, but thankfully
the existing absolute timeout API provides the tools we need (though
it does require that CONFIG_TIMEOUT_64BIT be enabled).
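A sketch of the drift-free rescheduling pattern this enables (timer,
deadline bookkeeping, and period hypothetical; requires
CONFIG_TIMEOUT_64BIT=y):
```c
#include <zephyr/kernel.h>

static struct k_timer my_timer;
static int64_t next_deadline; /* absolute deadline, in ticks */

void reschedule(int64_t period_ticks)
{
	/* Advance from the previous absolute deadline rather than from
	 * "now", so a late timer ISR does not accumulate drift.
	 */
	next_deadline += period_ticks;
	k_timer_start(&my_timer, K_TIMEOUT_ABS_TICKS(next_deadline),
		      K_NO_WAIT);
}
```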
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>