Commit graph

3183 commits

Author SHA1 Message Date
Andrew Boie
0095ed5384 kernel: rename z_is_idle_thread()
This takes an entry point and not a thread as argument.
Rename to z_is_idle_thread_entry() to make this clearer.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
2c1fb971e0 kernel: rename __swap
This is part of the core kernel -> architecture API and
has been renamed to z_arch_swap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
fe031611fd kernel: rename main/idle thread/stacks
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.

The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory and isn't accessible
to user threads anyway; just expose the actual thread
objects.

The redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines
in init.c are removed; just use the Kconfig options they derive
from.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
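
A minimal sketch of the shape of the change above (declarations are illustrative; the exact "before" pattern and qualifiers are assumptions based on the description):

	#include <kernel.h>

	/* before: thread object kept static, with a separate pointer exported */
	static struct k_thread _main_thread_s;
	k_tid_t const _main_thread = (k_tid_t)&_main_thread_s;

	/* after: the object itself is declared in kernel_internal.h under the
	 * z_ namespace, so no extra pointer variable is needed
	 */
	struct k_thread z_main_thread;
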
Andrew Boie
e6654103ba kernel: rename boot time globals
These are renamed to z_timestamp_main and z_timestamp_idle,
and now specified in kernel_internal.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
f6fb634b89 kernel: rename kernel_arch_init()
This is part of the core kernel -> architecture interface and
has been renamed z_arch_kernel_init().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
845aa6d114 kernel: renamespace arch_nop()
This is part of the core kernel -> architecture interface
and has been renamed to z_arch_nop().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
61901ccb4c kernel: rename z_new_thread()
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
9e1dda8804 timing_info: rename globals
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Anas Nashif
0bf1f9a408 tracing: add missing end_call for k_mutex_unlock
k_mutex_unlock had no end_call tracing call.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-30 10:49:37 -04:00
Anas Nashif
4abbd54cd5 tracing: remove useless ifdefing for CONFIG_TRACING
Tracing functions are no-ops if CONFIG_TRACING is disabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-30 10:49:37 -04:00
Charles E. Youse
c0c4ba8516 kernel/idle.c: fix compilation failure (SMP && !SCHED_IPI_SUPPORTED)
An #endif and the brace terminating a compound statement were
transposed, causing compilation errors with the above-specified
combination of configuration options.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-28 17:32:33 -04:00
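
A reduced illustration of the failure mode (not the actual idle.c code): with the closing brace and the #endif transposed, one of the two configurations ends up with unbalanced braces and fails to compile.

	void do_idle_work(void);	/* hypothetical helper for the illustration */

	void example(void)
	{
	#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_IPI_SUPPORTED)
		while (1) {
			do_idle_work();
	#endif
		}	/* this '}' sits outside the #if, so when the condition is
			 * false it has no matching '{' and compilation fails;
			 * the fix is to move it back above the #endif
			 */
	}
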
Peter A. Bigot
5639ea07f8 kernel: timeout: remove unused callback parameter from init function
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018.  Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().

As this function is an internal API deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-09-28 15:41:18 -04:00
Jan Van Winkel
677050c2af kernel/idle: Correct SMP_FALLBACK define
Corrected the define of SMP_FALLBACK to prevent an llvm warning.

llvm issues a warning because the behaviour of using defined(x) inside a
macro expansion is undefined (https://reviews.llvm.org/D15866).

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-09-27 20:32:26 -04:00
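
A sketch of the corrected pattern (the actual macro body in idle.c may differ): evaluate the condition once in a preprocessor conditional instead of embedding defined() in the macro's expansion.

	/* Before, SMP_FALLBACK expanded to an expression containing defined(),
	 * whose behaviour is undefined when it appears via macro expansion.
	 * Define a plain 0/1 value up front instead.
	 */
	#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_IPI_SUPPORTED)
	#define SMP_FALLBACK 1
	#else
	#define SMP_FALLBACK 0
	#endif

	#if SMP_FALLBACK
	/* spin-based idle loop for SMP without IPI support */
	#endif
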
Wayne Ren
76a3235ad2 kernel: fix the bug in atomic_c.c
* USERSPACE -> CONFIG_USERSPACE
* fix the wrong parameter type

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2019-09-26 21:13:20 -04:00
Andy Ross
d82f76a0bb kernel/sched: Don't make an IPI if we don't need it
If an architecture declares support for IPI, we still want to use it
only when running in SMP mode.

(This also fixes a build failure on ARC, which declares
CONFIG_SCHED_IPI_SUPPORTED but doesn't actually implement
z_arch_sched_ipi() yet).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
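
A minimal sketch of the guard this describes (the wrapper function is hypothetical; z_arch_sched_ipi() is the hook named above):

	/* provided by the architecture when CONFIG_SCHED_IPI_SUPPORTED=y */
	extern void z_arch_sched_ipi(void);

	static void signal_other_cpus(void)	/* hypothetical wrapper */
	{
	#if defined(CONFIG_SMP) && defined(CONFIG_SCHED_IPI_SUPPORTED)
		/* only interrupt the other CPUs when actually running in SMP mode */
		z_arch_sched_ipi();
	#endif
	}
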
Andy Ross
6a153efc1b kernel/timeout: Fix timeslicing edge case in SMP
The timeout code has an optimization where it refuses to send a new
timeout to the driver unless it is sooner than one already scheduled.
This won't work on SMP, though, because the timeout value when
timeslicing is enabled depends on the current thread, and on SMP the
decision as to the next thread will not be made until later (when we
swap, or exit an interrupt).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
11bd67db53 kernel/idle: Use normal idle in SMP when IPI is available
Now that we have a working IPI framework, there's no reason for the
default spin loop for the SMP idle thread.  Just use the default
platform idle and send an IPI when a new thread is readied.

Long term, this can be optimized if necessary (e.g. only send the IPI
to idling CPUs, or check priorities, etc...), but for a 2-cpu system
this is a very reasonable default.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
6c283ca3d0 kernel/thread: Must always initialize is_idle field
Our thread struct gets initialized piecewise in a bunch of locations
(this is sort of a design flaw).  The is_idle field, which was
introduced to identify idle threads in SMP (where there can be more
than one), was correctly set for idle threads but was being left
uninitialized elsewhere, and in a tiny handful of cases was turning up
nonzero.

The case in pipes.c was particularly vexing, as that isn't a thread at
all but one of the "dummy" threads used for timeouts (another design
flaw IMHO).

Get this right everywhere.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
cb3964f04f kernel/sched: Reset time slice on swap in SMP
In uniprocessor mode, the kernel knows when a context switch "is
coming" because of the cache optimization and can use that to do
things like update time slice state.  But on SMP the scheduler state
may be updated on the other CPU at any time, so we don't know that a
switch is going to happen until the last minute.

Expose reset_time_slice() as a public function and call it when needed
out of z_swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
d442927667 kernel/sched: Add missing SMP thread abort case
The loop in thread abort on SMP where we wait for the results on an
IPI correctly handled the case where a thread running on another CPU
gets its interrupt and self-aborts, but it missed the case where the
other thread pends before receiving the interrupt.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross
b0158cc81f kernel/sched: Fix reschedule points in SMP
There were two related bugs when in SMP mode:

1. Underneath z_reschedule(), the code was inexplicably checking the
   swap_ok flag on the current CPU to see if it was OK to preempt the
   current thread, but reschedule is the DEFINITION of a schedule
   point and we always want to swap, even if the current thread is
   non-preemptible.

2. With similar symptoms: in k_yield(), a previous fix corrected the
   queue handling for SMP, but it missed the case where a thread of
   the SAME priority as _current was on the queue and would fail to
   swap.  Yielding must always add the current thread to the back of
   its current priority queue.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Jim Shu
e124670f0b kernel/spinlock: Fix a SMP race condition of SPIN_VALIDATE
z_spin_lock_valid() reads a shared variable twice to perform two checks. If
this variable is modified by another CPU between the two reads, the
checked values are inconsistent. This inconsistency causes the error
that CPU0 can pass the check when it doesn't hold the spinlock, because
a zeroed-out thread_cpu value is ambiguous with the CPU0 ID.

Fix the inconsistency by reading the shared variable only once and using
the local copy for both checks.

Fixes #19299.

Signed-off-by: Jim Shu <cwshu@andestech.com>
2019-09-26 16:51:38 -04:00
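
A sketch of the read-once fix (the function name here is hypothetical to avoid clashing with the real declaration; the thread_cpu field, its masking, and the _current_cpu macro follow common Zephyr conventions and are assumptions):

	/* kernel-internal sketch; these are the headers kernel sources use */
	#include <kernel.h>
	#include <kernel_structs.h>

	static bool spin_lock_owner_check(struct k_spinlock *l)
	{
		/* snapshot the shared field once (available with
		 * CONFIG_SPIN_VALIDATE=y) so both checks see the same value,
		 * even if another CPU updates l->thread_cpu concurrently
		 */
		uintptr_t thread_cpu = l->thread_cpu;

		if (thread_cpu != 0U) {
			/* lock already held: is it held by this very CPU?
			 * (the low bits of thread_cpu carry the owner's CPU id)
			 */
			if ((thread_cpu & 3U) == _current_cpu->id) {
				return false;
			}
		}
		return true;
	}
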
Charles E. Youse
3036faf88a tests/benchmarks: fix BOOT_TIME_MEASUREMENT
The boot time measurement sample was giving bogus values on x86: an
assumption was made that the system timer is in sync with the CPU TSC,
which is not the case on most x86 boards.

Boot time measurements are no longer permitted unless the timer source
is the local APIC. To avoid issues of TSC scaling, the startup datum
has been forced to 0, which is in line with the ARM implementation
(the only other platform that supports this feature).

Cleanups along the way:

As the datum is now assumed zero, some variables are removed and
calculations simplified. The global variables involved in boot time
measurements are moved to the kernel.h header rather than being
redeclared in every place they are referenced. Since none of the
measurements actually use 64-bit precision, the samples are reduced
to 32-bit quantities.

In addition, this feature has been enabled in long mode.

Fixes: #19144

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-21 16:43:26 -07:00
Nicholas Lowell
5b322d9331 debug: tracing: add sys_trace_thread_name_set
Initial thread creation and tracing information
occurs with empty thread names.  For better tracing information,
we need a way to get the actual thread names, if they are set,
in order to better track thread names and their IDs.

Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
2019-09-19 00:37:35 -04:00
Steven Wang
2b2fa660b0 [Code coverage]: Fix the issue of function code coverage in device.c.
It was reported in the code coverage report that Z_SYSCALL_HANDLER() was
not called by other code if we run "sanitycheck -p qemu_x86 --coverage
-T tests/kernel/device/".

The root cause is that we include "errno.h", which includes
"include/generated/syscalls/device.h". This causes the declaration of
device_get_binding() in "include/generated/syscalls/device.h" to be
marked as "has been called", rather than Z_SYSCALL_HANDLER()
in device.c.

So I removed "#include <errno.h>", which is unused in device.c. Also,
"#include <sys/util.h>" is removed for the same reason.

Signed-off-by: Steven Wang <steven.l.wang@linux.intel.com>
2019-09-17 12:35:30 +08:00
Andrew Boie
a470ba1999 kernel: remove z_fatal_print()
Use LOG_ERR instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-12 05:17:39 -04:00
Andy Ross
643701aaf8 kernel: syscalls: Whitespace fixups
The semi-automated API changes weren't checkpatch aware.  Fix up
whitespace warnings that snuck into the previous patches.  Really this
should be squashed, but that's somewhat difficult given the structure
of the series.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross
075c94f6e2 kernel: Port remaining syscalls to new API
These calls are not accessible in CI tests, nor do they get built on
common platforms (in at least one case I found a typo which proved the
code was truly unused).  These changes are blind, so they live in a
separate commit.  But the nature of the port is mechanical, all other
syscalls in the system work fine, and any errors should be easily
corrected.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross
346cce31d8 kernel: Port remaining buildable syscalls to new API
These calls are buildable on common sanitycheck platforms, but are not
invoked at runtime in any tests accessible to CI.  The changes are
mostly mechanical, so the risk is low, but this commit is separated
from the main API change to allow for more careful review.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross
6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g.
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generate code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: an automatically generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function, z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
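
To make the split concrete, here is a hedged sketch for a hypothetical syscall with one 64-bit argument (the z_impl_/z_vrfy_/z_mrsh_ naming follows the commit; the word-packing order and the sample syscall itself are illustrative assumptions, not the literal gen_syscalls.py output):

	#include <zephyr/types.h>
	#include <stdint.h>

	/* kernel-side implementation */
	static int z_impl_sample_write(u64_t offset, u32_t len)
	{
		/* ... do the actual work ... */
		return 0;
	}

	/* verification: typesafe, same signature as the impl; validates, then chains */
	static int z_vrfy_sample_write(u64_t offset, u32_t len)
	{
		/* argument validation would go here */
		return z_impl_sample_write(offset, len);
	}

	/* unmarshalling (normally generated): reassemble the 64-bit value from
	 * two machine words, then call the verification function
	 */
	static uintptr_t z_mrsh_sample_write(uintptr_t arg0, uintptr_t arg1,
					     uintptr_t arg2)
	{
		u64_t offset = ((u64_t)arg1 << 32) | (u32_t)arg0;

		return (uintptr_t)z_vrfy_sample_write(offset, (u32_t)arg2);
	}
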
Pavlo Hamov
8076c8095b subsystem: kernel_shell: extend thread info
1) Dump time since last scheduler call.
Could be handy for tickless kernel debugging.
Will indicate that no RTC IRQ is being called.

2) Dump the current timeout of each thread.
Could be used to find out when a thread will wake up.

3) Dump human-friendly thread state.

4) Use shell_print instead of shell_fprintf.

Signed-off-by: Pavlo Hamov <pavlo_hamov@jabil.com>
2019-09-08 12:39:58 +02:00
Andrew Boie
90e6536053 kernel: fix default z_arch_cpu_halt()
k_cpu_idle() re-enables interrupts. Just spin
instead.

Fixes: #18973

Signed-off-by: Andrew Boie <andrewboie@gmail.com>
2019-09-07 09:57:40 -04:00
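
A minimal sketch of the fix as described (attributes and any interrupt masking are omitted; this only shows the "just spin" part):

	/* default z_arch_cpu_halt(): previously this looped on k_cpu_idle(),
	 * which re-enables interrupts; now it simply spins forever
	 */
	void z_arch_cpu_halt(void)
	{
		for (;;) {
		}
	}
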
Peter Bigot
a6067a38f8 kernel: reimplement k_uptime_get_32()
The current implementation does not return the low 32 bits of
k_uptime_get() as suggested by its documentation; it returns the number
of milliseconds represented by the low 32 bits of the underlying system
clock.  The truncation before translation results in discontinuities at
every point where the system clock increments bit 33.

Reimplement it using the full-precision value, and update the
documentation to note that this variant has little value for
long-running applications.

Closes #18739.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2019-09-03 22:50:41 +02:00
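
A sketch of the reimplementation described above, ignoring the syscall plumbing (the z_impl_ prefix and types are assumptions consistent with that era of the tree):

	#include <kernel.h>

	u32_t z_impl_k_uptime_get_32(void)
	{
		/* convert the full-precision tick count to milliseconds first,
		 * then truncate; truncating the clock before conversion is what
		 * caused the discontinuities described above
		 */
		return (u32_t)k_uptime_get();
	}
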
Peng Su
1084f48259 kernel: ignore z_fatal_halt() from code coverage
The reasons we decided to ignore it in code coverage:
1. No test case can cover the function for code coverage.
2. Even if we added a test, it would be marked as
   "never called by other code" because the function halts the
   CPU and can't return.

Signed-off-by: Peng Su <peng.su@intel.com>
2019-08-24 23:40:22 +02:00
Andy Ross
6f13980fc7 kernel/mutex: Fix locking to be SMP-safe
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread; it says
nothing about what the others are doing).

Use the pre-existing spinlock for all synchronization.  One wrinkle is
that the priority code needed to call z_thread_priority_set(),
which is a rescheduling call that cannot be made with a lock held.
So that got split out into a low-level utility that can update the
scheduler state but allows the caller to defer yielding until later.

Fixes #17584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:58:16 -04:00
Andrew Boie
b6d961b7d4 kernel: remove log system support for fatal msgs
This needs further design work due to problems with logging
C strings. Just always send to printk() for now until this
is resolved.

Fixes: #18052

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 10:14:12 -07:00
Andrew Boie
00bf76eaa7 kernel: add z_fatal_halt() to interface
Intended to be called from application-level implementations
of k_sys_fatal_error_handler().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-06 19:32:22 -07:00
Yasushi SHOJI
20d072465d kernel: sched: Do not force preempt when k_sched_unlock()
The scheduler lock is a nestable lock.  Unlocking while the lock is
still nested (i.e., still held) shouldn't preempt the current thread.

	k_sched_lock();
	k_sched_lock();
	k_sched_unlock();  /* <--- this shouldn't be a scheduling point */
	k_sched_unlock();  /* <--- this is a scheduling point */

This commit changes the preempt_ok argument from 1 to 0.  This lets
should_preempt() check whether it should preempt at that point or not.

This fixes #17869.

Signed-off-by: Yasushi SHOJI <y-shoji@ispace-inc.com>
2019-08-06 10:19:50 +02:00
Andrew Boie
8915e41b7b userspace: adjust arch memory domain interface
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.

This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.

This is needed for: #13441 #13074 #15135

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
7fae2bbc18 tests: increase main stack size for x86 with ztest
Some options like stack canaries use more stack space,
and on x86 512 bytes is no longer quite enough for
ztest's main thread stack.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
f281b74c56 userspace: set stack object earlier
Populate thread->stack_obj earlier in the thread initialization
process such that it is set when z_new_thread() is called.

There was nothing specific about its position, or the rest of
the code in that CONFIG_USERSPACE block, so just move it all up.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Nicolas Pitre
1f4b5ddd0f riscv32: rename to riscv
With the upcoming riscv64 support, it is best to use "riscv" as the
subdirectory name and for common symbols, as the riscv32 and riscv64
support code is almost identical. Whether to compile for 32-bit or
64-bit can then be decided later.

Redirects for the web documentation are also included.

Then zephyrbot complained about this:

"
New files added that are not covered in CODEOWNERS:

dts/riscv/microsemi-miv.dtsi
dts/riscv/riscv32-fe310.dtsi

Please add one or more entries in the CODEOWNERS file to cover
those files
"

So I assigned them to those who created them. Feel free to readjust
as necessary.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-08-02 13:54:48 -07:00
Robert Lubos
e9cdcc235f kernel: timeout: Fix macro usage in next_timeout function
The `next_timeout()` function used to call the `elapsed()` function
directly in the `MAX` macro call. This caused the `elapsed()` function
to be executed twice, with possibly different results, if the system
clock incremented its value in the meantime.

As a result, the whole `MAX(0, to->dticks - elapsed())` expression could
return an incorrect value of -1, which represents the K_FOREVER timeout.
This led to a stall in devices running a tickless kernel (as observed on
nRF52840).

Signed-off-by: Robert Lubos <robert.lubos@nordicsemi.no>
2019-08-01 12:28:44 +02:00
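
The hazard comes from MAX() being a macro that expands each argument more than once; a sketch of the shape of the fix (field and function names follow the commit text, the surrounding first() helper and headers are as in kernel/timeout.c and are illustrative here):

	static s32_t next_timeout(void)
	{
		struct _timeout *to = first();	/* earliest queued timeout */

		/* evaluate elapsed() exactly once; the old form
		 * MAX(0, to->dticks - elapsed()) expanded elapsed() twice, and a
		 * tick between the two calls could produce -1, i.e. K_FOREVER
		 */
		s32_t remaining = to->dticks - elapsed();

		return MAX(0, remaining);
	}
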
Andrew Boie
81ef42d2bc sanitycheck: simplify fault detection
Any fatal error will print "ZEPHYR FATAL ERROR" now, so
we don't have to maintain a set of strings in the
sanitycheck harness.py

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
96571a8c40 kernel: rename NANO_ESF
This is now called z_arch_esf_t, conforming to our naming
convention.

This needs to remain a typedef due to how our offset generation
header mechanism works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
8a9e8e0cd7 kernel: support log system for fatal errors
We introduce a new z_fatal_print() API and convert all
exception handling code to use it.
This routes messages to the logging subsystem if enabled.
Otherwise, messages are sent to printk().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
71ce8ceb18 kernel: consolidate error handling code
* z_NanoFatalErrorHandler() is now moved to common kernel code
  and renamed z_fatal_error(). Arches dump arch-specific info
  before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
  and renamed k_sys_fatal_error_handler(). It is now much simpler;
  the default policy is simply to lock interrupts and halt the system.
  If an implementation of this function returns, then the currently
  running thread is aborted.
* New arch-specific APIs introduced:
  - z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
  namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
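
A hedged sketch of an application-level k_sys_fatal_error_handler() override under this scheme (the reason-code policy is illustrative; header locations and the z_fatal_halt() signature follow the commits in this log and are assumptions here; returning from the handler aborts the offending thread, per the list above):

	#include <kernel.h>
	#include <logging/log_ctrl.h>

	void k_sys_fatal_error_handler(unsigned int reason, const z_arch_esf_t *esf)
	{
		ARG_UNUSED(esf);

		LOG_PANIC();	/* flush pending log messages, per the list above */

		if (reason == K_ERR_KERNEL_PANIC) {
			/* policy choice for this sketch: explicit panics halt the system */
			z_fatal_halt(reason);
		}

		/* returning lets the kernel abort the offending thread and continue */
	}
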
Wentong Wu
2463ded4c8 kernel: timeout: do not activate time slicing if idle thread ready
Zero slice_ticks when we can't time slice, so that next_timeout() will
ignore the slice_ticks of _current_cpu and the system can stay in a
low-power state for longer.

Fixes: #17368.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2019-07-24 14:02:23 -07:00