This takes an entry point, not a thread, as its argument.
Rename it to z_is_idle_thread_entry() to make this clearer.
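For reference, a minimal sketch of the renamed predicate, assuming it
simply compares against the idle thread's entry function:

  /* Sketch: the argument is an entry point, not a struct k_thread,
   * hence the new name. The comparison target "idle" is assumed
   * here to be the idle loop function.
   */
  static inline bool z_is_idle_thread_entry(void *entry_point)
  {
          return entry_point == idle;
  }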
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.
The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory, and the pointer variables
aren't accessible to user threads anyway; just expose the
actual thread objects.
The redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines
in init.c have been removed; just use the Kconfig options
they derive from.
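As a sketch, the central declarations in kernel_internal.h might look
like the following (the exact identifiers are assumptions for
illustration):

  /* Thread objects exposed directly; no separate pointer variables. */
  extern struct k_thread z_main_thread;
  extern struct k_thread z_idle_thread;

  /* Associated stacks, declared with the usual stack-extern macro. */
  K_THREAD_STACK_EXTERN(z_main_stack);
  K_THREAD_STACK_EXTERN(z_idle_stack);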
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
These are renamed to z_timestamp_main and z_timestamp_idle,
and now specified in kernel_internal.h.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface and
has been renamed z_arch_kernel_init().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.
z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().
References from test cases changed to k_is_in_isr().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
An #endif and the brace terminating a compound statement were
transposed, causing compilation errors with the above-specified
combination of configuration options.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018. Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().
As this function is an internal API, deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.
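Illustratively, a call site changes roughly as follows (the timer
fields and handler name here are hypothetical):

  /* before: the handler passed to z_timeout_init() was ignored */
  z_timeout_init(&timer->timeout, timer_expiry_fn);

  /* after: the real handler is supplied via z_add_timeout() */
  z_timeout_init(&timer->timeout);
  z_add_timeout(&timer->timeout, timer_expiry_fn, ticks);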
Signed-off-by: Peter A. Bigot <pab@pabigot.com>
Corrected the define of SMP_FALLBACK to prevent an llvm warning.
llvm issues a warning because the behaviour of using defined(x) inside
a macro expansion is undefined (https://reviews.llvm.org/D15866).
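For illustration, the problematic pattern and a conforming rewrite
(a sketch; the original macro body is assumed):

  /* Bad: defined() produced by macro expansion is undefined
   * behavior (C11 6.10.1), which llvm warns about.
   */
  #define SMP_FALLBACK \
          (defined(CONFIG_SMP) && !defined(CONFIG_SCHED_IPI_SUPPORTED))

  /* Good: evaluate defined() directly in an #if and define the
   * macro to a plain 0/1 value instead.
   */
  #if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_IPI_SUPPORTED)
  #define SMP_FALLBACK 1
  #else
  #define SMP_FALLBACK 0
  #endif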
Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
If an architecture declares support for IPI, we still want to use it
only when running in SMP mode.
(This also fixes a build failure on ARC, which declares
CONFIG_SCHED_IPI_SUPPORTED but doesn't actually implement
z_arch_sched_ipi() yet).
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The timeout code has an optimization where it refuses to send a new
timeout to the driver unless it is sooner than one already scheduled.
This won't work on SMP, though, because the timeout value when
timeslicing is enabled depends on the current thread, and on SMP the
decision as to the next thread will not be made until later (when we
swap, or exit an interrupt).
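A rough sketch of the resulting shape (names and structure here are
assumptions, not the actual patch):

  /* The "don't reprogram if a sooner timeout is already set"
   * shortcut is only safe on uniprocessor builds, where the next
   * thread (and hence any slice timeout) is already known here.
   */
  if (!IS_ENABLED(CONFIG_SMP) && next_ticks >= programmed_ticks) {
          return; /* the programmed timeout is still the soonest */
  }
  z_clock_set_timeout(next_ticks, false);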
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Now that we have a working IPI framework, there's no reason for the
default spin loop for the SMP idle thread. Just use the default
platform idle and send an IPI when a new thread is readied.
Long term, this can be optimized if necessary (e.g. only send the IPI
to idling CPUs, or check priorities, etc...), but for a 2-cpu system
this is a very reasonable default.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Our thread struct gets initialized piecewise in a bunch of locations
(this is sort of a design flaw). The is_idle field, which was
introduced to identify idle threads in SMP (where there can be more
than one), was correctly set for idle threads but was being left
uninitialized elsewhere, and in a tiny handful of cases was turning up
nonzero.
The case in pipes.c was particularly vexing, as that isn't a thread
at all but one of the "dummy" threads used for timeouts (another
design flaw, IMHO).
Get this right everywhere.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
In uniprocessor mode, the kernel knows when a context switch "is
coming" because of the cache optimization and can use that to do
things like update time slice state. But on SMP the scheduler state
may be updated on the other CPU at any time, so we don't know that a
switch is going to happen until the last minute.
Expose reset_time_slice() as a public function and call it when needed
out of z_swap().
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The loop in thread abort on SMP where we wait for the results on an
IPI correctly handled the case where a thread running on another CPU
gets its interrupt and self-aborts, but it missed the case where the
other thread pends before receiving the interrupt.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
There were two related bugs when in SMP mode:
1. Underneath z_reschedule(), the code was inexplicably checking the
swap_ok flag on the current CPU to see if it was OK to preempt the
current thread, but reschedule is the DEFINITION of a schedule
point and we always want to swap, even if the current thread is
non-preemptible.
2. With similar symptoms: in k_yield() a previous fix corrected the
queue handling for SMP, but it missed the case where a thread of
the SAME priority as _current was on the queue and would fail to
swap. Yielding must always add the current thread to the back of
the current priority.
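A hedged sketch of the corrected yield path (the queue helpers are
illustrative names):

  void z_impl_k_yield(void)
  {
          /* Always requeue at the tail of the priority level, so a
           * thread of the SAME priority as _current gets to run.
           */
          dequeue_thread(_current);
          queue_thread(_current);
          z_swap_unlocked();
  }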
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
z_spin_lock_valid() reads a shared variable twice to perform two
checks. If this variable is modified by another CPU between the two
reads, the checked values are inconsistent. This inconsistency causes
an error where CPU0 can pass the check even when it doesn't hold the
spinlock, because a zeroed-out thread_cpu value is ambiguous with the
CPU0 ID.
Fix the inconsistency by reading the shared variable only once and
using the local value for both checks.
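A sketch of the fix, assuming the CPU ID lives in the low bits of
thread_cpu:

  bool z_spin_lock_valid(struct k_spinlock *l)
  {
          /* Read the shared variable exactly once; both checks
           * below use this consistent local snapshot.
           */
          uintptr_t thread_cpu = l->thread_cpu;

          if (thread_cpu != 0) {
                  if ((thread_cpu & 3U) == _current_cpu->id) {
                          return false; /* lock already held here */
                  }
          }
          return true;
  }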
Fixes #19299.
Signed-off-by: Jim Shu <cwshu@andestech.com>
The boot time measurement sample was giving bogus values on x86: an
assumption was made that the system timer is in sync with the CPU TSC,
which is not the case on most x86 boards.
Boot time measurements are no longer permitted unless the timer source
is the local APIC. To avoid issues of TSC scaling, the startup datum
has been forced to 0, which is in line with the ARM implementation
(the only other platform that supports this feature).
Cleanups along the way:
As the datum is now assumed zero, some variables are removed and
calculations simplified. The global variables involved in boot time
measurements are moved to the kernel.h header rather than being
redeclared in every place they are referenced. Since none of the
measurements actually use 64-bit precision, the samples are reduced
to 32-bit quantities.
In addition, this feature has been enabled in long mode.
Fixes: #19144
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Initial thread creation and tracing information
occurs with empty thread names. For better tracing information,
we need a way to get the actual thread names, if they are set,
in order to better track thread names and their IDs.
Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
It was reported in the code coverage report that Z_SYSCALL_HANDLER()
was not called by other code when we run "sanitycheck -p qemu_x86
--coverage -T tests/kernel/device/".
The root cause is that we include "errno.h", which includes
"include/generated/syscalls/device.h". This causes the declaration
of device_get_binding() in "include/generated/syscalls/device.h"
to be marked as "has been called", rather than the Z_SYSCALL_HANDLER()
in device.c.
So I removed "#include <errno.h>", which is unused in device.c. Also,
"#include <sys/util.h>" is removed for the same reason.
Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
The semi-automated API changes weren't checkpatch aware. Fix up
whitespace warnings that snuck into the previous patches. Really this
should be squashed, but that's somewhat difficult given the structure
of the series.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These calls are not accessible in CI test, nor do they get built on
common platforms (in at least one case I found a typo which proved the
code was truly unused). These changes are blind, so live in a
separate commit. But the nature of the port is mechanical, all other
syscalls in the system work fine, and any errors should be easily
corrected.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These calls are buildable on common sanitycheck platforms, but are not
invoked at runtime in any tests accessible to CI. The changes are
mostly mechanical, so the risk is low, but this commit is separated
from the main API change to allow for more careful review.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
System call arguments, at the arch layer, are single words. So
passing wider values requires splitting them into two registers at
call time. This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.
Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths. So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.
Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types. So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*(). The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function. It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.
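To make the two-step shape concrete, here is a sketch for a
hypothetical syscall taking a 64-bit argument (all names are
illustrative, not actual generated output):

  extern int z_impl_k_foo(uint64_t value);

  /* Typesafe verification function, supplied by the subsystem
   * author; same signature as the impl function.
   */
  static inline int z_vrfy_k_foo(uint64_t value)
  {
          /* ...argument validation goes here... */
          return z_impl_k_foo(value);
  }

  /* Auto-generated unmarshalling step: stitches the wide argument
   * back together from two machine words, then calls z_vrfy_*().
   */
  static uintptr_t z_mrsh_k_foo(uintptr_t arg0, uintptr_t arg1,
                                void *ssf)
  {
          uint64_t value = ((uint64_t)arg1 << 32) | (uint64_t)arg0;

          (void)ssf; /* stack frame pointer, unused in this sketch */
          return (uintptr_t)z_vrfy_k_foo(value);
  }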
This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs. Future commits will port the less testable code.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
1) Dump the time since the last scheduler call.
Could be handy for tickless kernel debugging.
It will indicate that no RTC IRQ is being called.
2) Dump the current timeout of each thread.
Could be used to find out when a thread will wake up.
3) Dump a human-friendly thread state.
4) Use shell_print instead of shell_fprintf.
Signed-off-by: Pavlo Hamov <pavlo_hamov@jabil.com>
The current implementation does not return the low 32 bits of
k_uptime_get() as suggested by its documentation; it returns the number
of milliseconds represented by the low 32 bits of the underlying system
clock. The truncation before translation results in discontinuities at
every point where the system clock increments bit 33.
Reimplement it using the full-precision value, and update the
documentation to note that this variant has little value for
long-running applications.
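A sketch of the reimplementation: compute the full-precision uptime
first, then truncate.

  u32_t z_impl_k_uptime_get_32(void)
  {
          /* Truncate after translation: rollover of bit 33 in the
           * underlying system clock no longer causes a discontinuity.
           */
          return (u32_t)k_uptime_get();
  }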
Closes #18739.
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
The reasons we decided to ignore it in code coverage:
1. No test case can cover the function for code coverage.
2. Even if we added a test for it, it would be marked as
"never called by other code", because the function halts
the CPU and cannot return.
Signed-off-by: Peng Su <peng.su@intel.com>
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread, it says
nothing about what the others are doing).
Use the pre-existing spinlock for all synchronization. One wrinkle is
that the priority code needed to call z_thread_priority_set(), which
is a rescheduling call that cannot be made with a lock held. So that
got split out into a low-level utility that updates the scheduler
state but allows the caller to defer yielding until later.
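A hedged sketch of that split (the helper name z_set_prio and the
surrounding shape are assumptions):

  /* Low-level utility: updates scheduler state under the lock but
   * does NOT reschedule; returns whether a reschedule is needed.
   */
  bool z_set_prio(struct k_thread *thread, int prio);

  k_spinlock_key_t key = k_spin_lock(&lock);
  bool need_sched = z_set_prio(thread, new_prio);

  k_spin_unlock(&lock, key);
  if (need_sched) {
          /* Yield deferred until after the lock is dropped. */
          z_reschedule_unlocked();
  }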
Fixes #17584
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This needs further design work due to problems with logging
C strings. Just send always to printk() for now until this
is resolved.
Fixes: #18052
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The scheduler lock is a nestable lock. Unlocking while the lock is
still held at an outer nesting level shouldn't preempt the current
thread:
k_sched_lock();
k_sched_lock();
k_sched_unlock(); /* <--- this shouldn't be a scheduling point */
k_sched_unlock(); /* <--- this is a scheduling point */
This commit changes the preempt_ok argument from 1 to 0. This lets
should_preempt() check whether it should preempt at that point or not.
This fixes #17869.
Signed-off-by: Yasushi SHOJI <y-shoji@ispace-inc.com>
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.
This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.
This is needed for: #13441 #13074 #15135
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Some options, like stack canaries, use more stack space, and on x86
512 bytes is not quite enough for ztest's main thread stack.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Populate thread->stack_obj earlier in the thread initialization
process such that it is set when z_new_thread() is called.
There was nothing specific about its position, or the rest of
the code in that CONFIG_USERSPACE block, so just move it all up.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
With the upcoming riscv64 support, it is best to use "riscv" as the
subdirectory name and for common symbols, as the riscv32 and riscv64
support code is almost identical. Whether a 32-bit or 64-bit build is
wanted can then be decided later.
Redirects for the web documentation are also included.
Then zephyrbot complained about this:
"
New files added that are not covered in CODEOWNERS:
dts/riscv/microsemi-miv.dtsi
dts/riscv/riscv32-fe310.dtsi
Please add one or more entries in the CODEOWNERS file to cover
those files
"
So I assigned them to those who created them. Feel free to readjust
as necessary.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The `next_timeout()` function used to call the `elapsed()` function
directly in the `MAX` macro call. This caused the `elapsed()` function
to be executed twice, with possibly different results if the system
clock incremented its value in the meantime.
As a result, the whole `MAX(0, to->dticks - elapsed())` expression
could return an incorrect value of -1, which represents the K_FOREVER
timeout. This led to a stall in devices running the tickless kernel
(as observed on nRF52840).
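A sketch of the fix: call elapsed() once and use the cached value in
the expression (the surrounding details are assumed):

  static s32_t next_timeout(void)
  {
          struct _timeout *to = first();
          s32_t ticks_elapsed = elapsed(); /* sample the clock once */
          s32_t ret = to == NULL ? MAX_WAIT
                  : MAX(0, to->dticks - ticks_elapsed);

          return ret;
  }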
Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
Any fatal error will print "ZEPHYR FATAL ERROR" now, so
we don't have to maintain a set of strings in the
sanitycheck harness.py
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is now called z_arch_esf_t, conforming to our naming
convention.
This needs to remain a typedef due to how our offset generation
header mechanism works.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We introduce a new z_fatal_print() API and convert all
exception handling code to use it.
This routes messages to the logging subsystem if enabled.
Otherwise, messages are sent to printk().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* z_NanoFatalErrorHandler() is now moved to common kernel code
and renamed z_fatal_error(). Arches dump arch-specific info
before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
and renamed k_sys_fatal_error_handler(). It is now much simpler;
the default policy is simply to lock interrupts and halt the system.
If an implementation of this function returns, then the currently
running thread is aborted (see the sketch after this list).
* New arch-specific APIs introduced:
- z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()
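A sketch of the default policy (close to, but not necessarily
identical to, the final code):

  __weak void k_sys_fatal_error_handler(unsigned int reason,
                                        const z_arch_esf_t *esf)
  {
          ARG_UNUSED(esf);

          /* LOG_PANIC() flushes any buffered log messages. */
          LOG_PANIC();
          z_arch_system_halt(reason);
          /* If an override of this function returns instead, the
           * currently running thread is aborted by the caller.
           */
  }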
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Zero slice_ticks when the current thread can't be time sliced, so
that next_timeout() will ignore the slice_ticks of _current_cpu and
the system can stay in a low power state for a longer time.
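A rough sketch of the intent, using the names from the description
above (the structure is assumed):

  void reset_time_slice(void)
  {
          /* Zero first: when _current can't be time sliced,
           * next_timeout() then ignores _current_cpu->slice_ticks
           * and the system can stay in low power state longer.
           */
          _current_cpu->slice_ticks = 0;

          if (sliceable(_current)) {
                  _current_cpu->slice_ticks = slice_time;
          }
  }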
Fixes: #17368.
Signed-off-by: Wentong Wu <wentong.wu@intel.com>