Commit graph

2970 commits

Author SHA1 Message Date
Stephanos Ioannidis 360f810704 kernel: Migrate to K_KERNEL_PINNED_STACK_ARRAY_DECLARE
This commit updates all deprecated `K_KERNEL_PINNED_STACK_ARRAY_EXTERN`
macro usages to use the `K_KERNEL_PINNED_STACK_ARRAY_DECLARE` macro
instead.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-06-20 10:25:52 +02:00
Gerard Marull-Paretas afffc1006f kernel: remove redundant <zephyr/zephyr.h> includes
Files including <zephyr/kernel.h> do not have to include
<zephyr/zephyr.h>, a shim to <zephyr/kernel.h>.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-06-15 09:13:11 +02:00
David Palchak b4a7f0f2ca linker: ensure global constructors only run once
Rename the symbols used to denote the locations of the global
constructor lists and modify the Zephyr start-up code accordingly.
On POSIX systems this ensures that the native libc init code won't
find any constructors to run before Zephyr loads.

Fixes #39347, #36858

Signed-off-by: David Palchak <palchak@google.com>
2022-06-09 11:33:36 +02:00
Keith Packard 275b40ef25 kernel: Allow non-zephyr toolchains to advertise TLS support
Use a new environment variable,
ZEPHYR_TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE, to set the value for
TOOLCHAIN_SUPPORTS_THREAD_LOCAL_STORAGE instead of setting it to 'n' for
all non-Zephyr toolchains. In particular, the Debian arm-none-eabi
toolchain has TLS support and with this option, can be used to build
Zephyr with thread local variables.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-06-05 14:29:12 +02:00
Andy Ross fb613594c7 kernel/sched: Panic on aborting essential threads
Documentation specifies that aborting/terminating/exiting essential
threads is a system panic condition, but we didn't actually implement
that, and allowed it just as for any other thread. At least one app wants to
exploit this documented behavior as a "watchdog" kind of condition,
and that seems reasonable.  Do what we say we're supposed to do.

This also includes a small fix to a test, which seemed like it was
written to exercise exactly this condition.  Except that it failed to
detect whether or not a system fatal error was actually signaled and
was (incorrectly) indicating "success".  Check that we actually enter
the handler.

Fixes #45545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-20 12:34:30 +02:00
Peter Mitsis 976e4087ec kernel: Fix gathering of runtime thread stats
The function k_thread_runtime_stats_all_get() now populates the
current_cycles field in the thread runtime stats structure.

Resets the number of cycles in the CPU's current usage window once
the idle thread is scheduled.

Fixes the average_cycles calculation.
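
A minimal usage sketch (assuming the public runtime-stats API; the
variable names are arbitrary):

    k_thread_runtime_stats_t stats;

    if (k_thread_runtime_stats_all_get(&stats) == 0) {
        /* aggregate execution cycles; current_cycles is now filled in too */
        printk("busy cycles: %llu\n",
               (unsigned long long)stats.execution_cycles);
    }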

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-05-13 10:19:53 -05:00
Keith Packard 4fc00cae7a kernel: Allow Zephyr to use libc's internal errno
For a library which already provides a multi-thread aware errno, use
that instead of creating our own internal value.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-05-12 19:06:48 -04:00
Lucas Dietrich 9a848b3ad4 kernel: workq: Add internal function z_work_submit_to_queue()
This adds the internal function z_work_submit_to_queue(), which
submits the work item to the queue but doesn't force the thread to yield,
compared to the public function k_work_submit_to_queue().

When called from poll.c in the context of k_work_poll events, it ensures
that the thread does not yield while holding the spinlock of the object
that became available.

Fixes #45267

Signed-off-by: Lucas Dietrich <ld.adecy@gmail.com>
2022-05-10 18:39:51 +02:00
Gerard Marull-Paretas cffefc818d kernel: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all kernel code to the
new prefix <zephyr/...>. Note that the conversion has been scripted,
refer to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 09:26:20 +02:00
Jordan Yates 1ef647f396 kernel: add k_can_yield helper function
Implements a function that application and driver code can use to check
whether it is valid to yield (or block) in the current context. This
check is required for functions that can feasibly be run from multiple
contexts. The primary intended use case is power management transition
functions, which can be run by application code explicitly or
automatically in the idle thread by system PM.
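
As a hedged sketch of the intended use in a PM transition path (the
helper function and delay values below are hypothetical; k_can_yield()
is the call added here):

    static void power_up_regulator(void)
    {
        if (k_can_yield()) {
            k_msleep(5);        /* thread context: safe to block */
        } else {
            k_busy_wait(5000);  /* ISR or idle thread: spin instead */
        }
    }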

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2022-05-06 11:33:10 +02:00
Bradley Bolen 88ba97fea4 arch: arm: aarch32: cortex_a_r: Add shared FPU support
This adds lazy floating point context switching.  On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away.  If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored.  If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.

The undefined instruction handler is responsible for saving away the
floating point context if needed.  If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved.  Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Flavio Ceolin 551038e748 kernel: sched: Change cpu pin only for not executing threads
Do not allow changing the CPU to which a thread is pinned while it is
already being executed. This allows further optimizations on some
platforms with incoherent memory, since we can safely assume that the
thread will run on the same CPU and avoid invalidating / flushing the
cache during context switches.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-05-04 13:46:48 -04:00
Andy Ross 7a59cebf12 kernel/k_timer: Robustify vs. late interrupts
The k_timer utility was written to assume that the kernel timeout
handler would never be delayed by more than a tick, so it can naively
reschedule the next interrupt with a simple delay.

Unfortunately real platforms have glitchy hardware and high tick
rates, and on intel_adsp we're seeing this promise being broken in
some circumstances.

It's probably not a good idea to try to plumb the timer driver
interface up into the IPC layer to do this correction, but thankfully
the existing absolute timeout API provides the tools we need (though
it does require that CONFIG_TIMEOUT_64BIT be enabled).
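
For reference, the absolute timeout API referred to here can be used
roughly as follows (a sketch assuming CONFIG_TIMEOUT_64BIT=y; my_timer
and period_ticks are hypothetical):

    /* arm the timer against an absolute tick count, so a late
     * interrupt does not push the next expiry further into the future
     */
    int64_t deadline = k_uptime_ticks() + period_ticks;

    k_timer_start(&my_timer, K_TIMEOUT_ABS_TICKS(deadline), K_NO_WAIT);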

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-04 09:55:46 -05:00
Andy Ross b4e9ef0691 kernel/sched: Defer IPI sending to schedule points
The original design intent with arch_sched_ipi() was that
interprocessor interrupts were fast and easily sent, so to reduce
latency the scheduler should notify other CPUs synchronously when
scheduler state changes.

This tends to result in "storms" of IPIs in some use cases, though.
For example, SOF will enumerate over all cores doing a k_sem_give() to
notify a worker thread pinned to each, each call causing a separate
IPI.  Add to that the fact that unlike x86's IO-APIC, the intel_adsp
architecture has targeted/non-broadcast IPIs that need to be repeated
for each core, and suddenly we have an O(N^2) scaling problem in the
number of CPUs.

Instead, batch the "pending" IPIs and send them only at known
scheduling points (end-of-interrupt and swap).  This semantically
matches the locations where application code will "expect" to see
other threads run, so arguably is a better choice anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Andy Ross 3267cd327e kernel/sched: Refactor IPI signaling
Minor cleanup, we had a bunch of duplicated #if logic to send IPIs,
put it all in one place.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Andy Ross 8d94967ec4 kernel/workq: Cleanup bespoke reschedule point
The work queue has a semi/non-standard reschedule point implemented
using k_yield(), with a check to see if the current thread is
preemptible.  Just call z_reschedule_unlocked(), it has this check
internally and is the intended API for this.

Really, this is only a half fix.  Ideally the schedule point and the
lock release should be atomic[1] via the more idiomatic
z_reschedule().  But that would take some surgery, so let's go with
the simpler cleanup first.

This also avoids having to duplicate logic that gets added to
reschedule points by an upcoming patch.

[1] So that they represent a condition variable and don't race at the
end. In this case the race is present but benign, since the only thing
we really want to know is that the queue thread gets a chance to run.
The only cost is an occasional duplicated/needless context switch if
two threads are racing on a submit.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-02 10:23:13 -05:00
Peter Mitsis e9b288b713 kernel: mutex: remove unnecessary schedule locking
Removes an unnecessary schedule lock/unlock pair from k_mutex_unlock().

Rationale: Given that only the current thread (which would also be the
mutex owner) will be able to modify the mutex object AND that a
recursive unlock ought never trigger any reschedule (as it does not
touch the pend queue), then performing a schedule lock is not needed
prior to testing for a recursive unlock.

Furthermore, even if it is not a recursive unlock, then a schedule lock
is superfluous as the existing spinlock provides sufficient protection.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-04-20 21:03:59 -04:00
Peter Mitsis a30cf39975 kernel: update k_thread_state_str() API
When threads are in more than one state at a time, k_thread_state_str()
returns a string that lists each of its states delimited by a '+'.
This in turn necessitates a change to the API that includes both a
pointer to the buffer to use for the string and the size of the buffer.
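
A sketch of the updated call shape (buffer name and size are arbitrary):

    char buf[32];

    /* the caller now supplies the destination buffer and its size */
    printk("state: %s\n",
           k_thread_state_str(k_current_get(), buf, sizeof(buf)));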

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-04-20 20:20:13 -04:00
Anas Nashif c9d0248867 kernel: introduce convenience API to pin thread to a cpu
Add an API that clears cpu mask from a thread and sets it to a specific
CPU.

This is the equivalent of:

        k_thread_cpu_mask_clear(&thread);
        k_thread_cpu_mask_enable(&thread, cpu_idx);

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-04-19 13:05:09 -04:00
Flavio Ceolin d02a1e9879 pm: Only resize power domains
Instead of resizing all device handles, we just resize devices that are
power domains. This means that a power domain has to be declared as
compatible with "power-domain" in its devicetree node.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-04-18 17:25:01 -07:00
Flavio Ceolin 0b13b44a66 pm: device: Dynamically add a device to a power domain
Add an API to add devices to a power domain at runtime. The number of
devices that can be added is defined at build time.

The script gen_handles.py will check the number defined in
`CONFIG_PM_DEVICE_POWER_DOMAIN_DYNAMIC` to resize the handles vector,
adding empty slots in the supported sector to be used later.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-04-18 17:25:01 -07:00
Andy Ross 0b2ed3818d kernel/timeout: Cleanup/speedup parallel announce logic
Commit b1182bf83b ("kernel/timeout: Serialize handler callbacks on
SMP") introduced an important fix to timeout handling on
multiprocessor systems, but it did it in a clumsy way by holding a
spinlock across the entire timeout process on all cores (everything
would have to spin until one core finished the list).  The lock also
delays any nested interrupts that might otherwise be delivered, which
breaks our nested_irq_offload case on xtensa+SMP (where contra x86,
the "synchronous" interrupt is sensitive to mask state).

Doing this right turns out not to be so hard: take the timeout lock,
check to see if someone is already iterating
(i.e. "announce_remaining" is non-zero), and if so just increment the
ticks to announce and exit.  The original cpu will then complete the
full timeout list without blocking any others longer than needed to
check the timeout state.
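
In pseudo-C, the described approach is roughly the following (names
follow the commit text; this is a sketch, not the literal kernel code):

    k_spinlock_key_t key = k_spin_lock(&timeout_lock);

    if (announce_remaining != 0) {
        /* another cpu is already walking the list; hand it our ticks */
        announce_remaining += ticks;
        k_spin_unlock(&timeout_lock, key);
        return;
    }

    announce_remaining = ticks;
    /* ... walk the timeout list, dropping the lock around callbacks ... */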

Fixes #44758

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-04-13 13:26:14 -07:00
Tomasz Bursztyka 6654d4bc00 kernel/device: Remove unknown external pointer
Remove a legacy pointer declaration that remained.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-04-08 09:59:00 -04:00
Andy Ross b1182bf83b kernel/timeout: Serialize handler callbacks on SMP
On multiprocessor systems, it's routine to enter sys_clock_announce()
in parallel (the driver will generally announce zero ticks on all but
one cpu).

When that happens, each call will independently enter the loop over
the timeout list.  The access is correctly synchronized, so the list
handling is correct.  But the lock is RELEASED around the invocation
of the callback, which means that the individual callbacks may
interleave between cpus.  That means that individual
application-provided callbacks may be executed in parallel, which to
the app is indistinguishable from "out of order".

That's surprising and error-prone.  Don't do it.  Place a secondary
outer spinlock around the announce loop (but not the timeslicing
handling) to correctly serialize the timeout handling on a single cpu.

(It should be noted that this was discovered not because of a timeout
callback race, but because the resulting simultaneous calls to
sys_clock_set_timeout from separate cores seems to cause extremely
high latency excursions on intel_adsp hardware using the cavs_timer
driver.  That hardware issue is still poorly understood, but this fix
is desirable regardless.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-04-08 09:28:47 +02:00
Trond Einar Snekvik 6224ecbfa6 kernel: Remove idle thread cpu index on single-core devices
The idle thread got an index suffix in #23536 to make it easier to
identify different idle threads on different cores. This looks out of
place on single-core devices when the idle thread is listed next to
other kernel threads, such as main.

Remove the idle thread index on single-core platforms, and replace all
references to this format in tests and documentation.

Signed-off-by: Trond Einar Snekvik <Trond.Einar.Snekvik@nordicsemi.no>
2022-03-30 10:08:48 -04:00
Nicolas Pitre c9e3e0d956 sched: formalize the passing of NULL to z_get_next_switch_handle()
This is an attempt at formally distinguishing and supporting the case
described in 40795 where an architecture doesn't preserve/restore the
complete thread state upon entering/exiting interrupt exception state.

This is mainly about promoting the current behavior from the accepted
workaround to a formal API specification. This workaround is currently
used on ARM64 but RISC-V requires it too.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-18 13:32:49 -04:00
Nazar Kazakov f483b1bc4c everywhere: fix typos
Fix a lot of typos

Signed-off-by: Nazar Kazakov <nazar.kazakov.work@gmail.com>
2022-03-18 13:24:08 -04:00
Anas Nashif 460b37fbe1 kernel: SMP is Symmetric multiprocessing
Fix kconfig for SMP and use correct terminology for SMP.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-03-15 11:07:29 -04:00
Nazar Kazakov 9713f0d47c everywhere: fix typos
Fix a lot of typos

Signed-off-by: Nazar Kazakov <nazar.kazakov.work@gmail.com>
2022-03-14 20:22:24 -04:00
Nicolas Pitre 962b374129 userspace: plug thread index leak in k_object_alloc()
The thread index should be freed when the object allocation fails.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-03-14 19:18:34 -04:00
Gerard Marull-Paretas 5573d8d11e kernel: use DEVICE_DT_GET_OR_NULL for entropy device
A reference to the entropy device can be obtained at compile time, so
avoid using device_get_binding().
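
The compile-time lookup mentioned here is roughly of this form (a
sketch using the standard zephyr,entropy chosen node):

    static const struct device *const entropy_dev =
        DEVICE_DT_GET_OR_NULL(DT_CHOSEN(zephyr_entropy));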

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-11 15:27:05 -08:00
Flavio Ceolin ac079fcba6 kernel: init: Simplify early rand get
There is an API to get a specific number of random bytes. There is
no need to re-implement this logic here.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-03-10 12:45:58 -05:00
Dominik Ermel 8db0e5a42e kernel: Reduce strncpy in z_impl_k_thread_name_set by null char
There is no point in copying the terminating NULL character just to
overwrite it on the next line.

Signed-off-by: Dominik Ermel <dominik.ermel@nordicsemi.no>
2022-03-10 10:12:48 +01:00
Andy Ross 3e696896bf kernel: Add "per thread" timeslice mechanism
Zephyr's timeslice implementation has always been somewhat primitive.
You get a global timeslice that applies broadly to the whole bottom of
the priority space, with no ability (beyond that one priority
threshold) to tune it to work on certain threads, etc...

This adds an (optionally configurable) API that allows timeslicing to
be controlled on a per-thread basis: any thread at any priority can be
set to timeslice, for a configurable per-thread slice time, and at the
end of its slice a callback can be provided that can take action.
This allows the application to implement things like responsiveness
heuristics, "fair" scheduling algorithms, etc... without requiring
that facility in the core kernel.
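
A hedged sketch of how a per-thread slice might be configured (the
callback body and my_thread are hypothetical; the call shape assumes
the API added by this change with CONFIG_TIMESLICE_PER_THREAD=y):

    static void slice_expired(struct k_thread *thread, void *data)
    {
        /* e.g. demote the thread, rotate a queue, or record stats */
    }

    /* give my_thread a private 10-tick slice with an expiry callback */
    k_thread_time_slice_set(&my_thread, 10, slice_expired, NULL);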

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-03-09 13:49:44 -05:00
Gerard Marull-Paretas 95fb0ded6b kconfig: remove Enable from boolean prompts
According to Kconfig guidelines, boolean prompts must not start with
"Enable...". The following command has been used to automate the changes
in this patch:

sed -i "s/bool \"[Ee]nables\? \(\w\)/bool \"\U\1/g" **/Kconfig*

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-03-09 15:35:54 +01:00
Andy Ross 2b210cb3db kernel: Refactor SMP cpu initialization a bit
Things had gotten a little tangled in there so let's do some cleanup.

Remove the distressingly-special-purpose z_reinit_idle_thread() hook
(which existed to support secondary core bringup when
SMP_BOOT_DELAY=y), and just fold that into a generic z_init_cpu(),
which we can call in obvious and symmetric ways from main
initialization, z_smp_init(), and z_smp_start_cpu() (the now-official
programmatic hook for starting cpus).

Remove the "#if CONFIG_MP_NUM_CPUS > 1" exclusions.  These weren't
saving any code size and were propagating themselves into platform
layers trying to avoid build failures.

There are some "special" APIs added for SOF which need to go away in
favor of the newer/generic z_smp_start_cpu().  Collect them in one
place and put them under a "#ifdef CONFIG_SOF" to prevent them from
being used in Zephyr apps.

Move some function declarations that didn't have homes into
<kernel/thread.h>.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-03-01 09:59:15 -05:00
Daniel Leung b46916484a kernel: mmu: add a log line for z_phys_unmap
This adds a LOG_DBG() line for z_phys_unmap which mirrors
what is in z_phys_map(). This also fixes a warning from
Clang about a variable being set but never used (addr_offset).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-02-24 08:38:38 -06:00
Nicolas Pitre 3617398f1d kernel/init.c: missing memset() to early_memset() conversion
Commit 678b76e4b0 ("kernel/init.c: allow for memset/memcpy
alternatives during early boot") and commit da28829b64 ("kernel:
zero the bss section of OCM memory at boot time") were created
independently and missed changes from each other.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-22 09:05:40 -05:00
Andy Ross 73453a39d1 arch: Add IRQ_OFFSET_NESTED feature
The x86 and xtensa implementations of irq_offload() invoke synchronous
interrupts on the local CPU, and are therefore safe to use from within
an interrupt context.  This is a cheap and portable way to exercise
nested interrupts, which are otherwise highly platform-dependent to
test.  Add a kconfig to signal the capability.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-02-21 22:10:03 -05:00
Nicolas Pitre 678b76e4b0 kernel/init.c: allow for memset/memcpy alternatives during early boot
Zeroing the BSS and copying data to RAM with regular memset/memcpy may
cause problems when those functions are assuming a fully initialized
system for their optimizations to work e.g. some instructions require
an active MMU, but turning the MMU on needs the .bss section to be
cleared first, etc.

Commit c5b898743a ("aarch64: Fix alignment fault on z_bss_zero()")
provides a detailed explanation of such a case.

Replacing z_bss_zero() with an architecture specific one is problematic
as the former may see new sections added to it that would be missed by
the latter. The same reasoning goes for z_data_copy().

Let's make maintenance much easier by providing weak versions of
memset/memcpy that can be overridden by architecture-specific safe
versions when needed.
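
A minimal sketch of the weak-default idea (the z_early_memset name
matches the follow-up commit; the exact prototype is an assumption):

    /* default implementation: forward to the regular libc routine */
    __weak void z_early_memset(void *dst, int c, size_t n)
    {
        (void)memset(dst, c, n);
    }

    /* an architecture that cannot run plain memset this early, e.g.
     * before the MMU is on, supplies a non-weak override of the symbol
     */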

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-02-21 21:00:12 -05:00
Krzysztof Chruscinski 1da97e1374 kernel: Add function for calculating stack usage
Extract the stack usage calculation from k_thread_stack_space_get to
z_stack_space_get so it can also be used for the interrupt stack.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2022-02-21 20:57:17 -05:00
Immo Birnbaum da28829b64 kernel: zero the bss section of OCM memory at boot time
If a chosen entry exists for a memory area of type OCM, zero the OCM
memory's bss section at boot-time.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2022-02-21 20:53:44 -05:00
Flavio Ceolin 47b7c2e931 kernel: Fix timeout issue with SYSTEM_CLOCK_SLOPPY_IDLE
We can't simply use CLAMP to set the next timeout because
when CONFIG_SYSTEM_CLOCK_SLOPPY_IDLE is set, MAX_WAIT is
a negative number and then CLAMP would be called with
the upper boundary lower than the lower boundary.

Fixes #41422

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-02-15 13:05:04 -05:00
Ederson de Souza 75fd452bc0 kernel/init.c: Initialise logging subsystem after arch
So that logging and "satellite" subsystems, such as tracing and object
tracking can count on kernel structs being properly initialised, such
as `_current_cpu`.

Fixes #42061.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-01-26 10:09:19 -05:00
Peter Mitsis 11f8f6697f kernel: Update CPU runtime stats of non-idle time
Updates sched_cpu_update_usage() such that the CPU runtime stats
only update its non-idle time when the current thread is not the
idle thread. This is necessary as otherwise the CPU's idle time will
be double counted in k_thread_runtime_stats.execution_cycles.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-20 08:22:01 -05:00
Flavio Ceolin bb5d644158 pm: Do not suspend during kernel initialization
Check whether the kernel has fully initialized before taking any action
that may suspend the system.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-01-19 14:14:28 -05:00
Daniel Leung 7c6b016ce7 kernel: introduce hidden CONFIG_KERNEL_VM_SUPPORT
Introduce a hidden kconfig CONFIG_KERNEL_VM_SUPPORT which
enables some kconfigs that are required for virtual memory
support. CONFIG_KERNEL_VM_BASE, CONFIG_KERNEL_VM_OFFSET,
and CONFIG_KERNEL_VM_SIZE are moved under this new kconfig
so these can be enabled independent of CONFIG_MMU.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-18 19:18:30 -05:00
Daniel Leung 2e5501a3fe kernel: move CONFIG_MMU into kernel Kconfig
This moves CONFIG_MMU and its children from arch/Kconfig into
kernel/Kconfig. These are used to enable kernel support of MMU
so they should be under kernel/.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-18 19:18:30 -05:00
Daniel Leung 88cfd3343d kernel: arch: no need for #ifdef MMU in header
There is no need to use conditional compilation for the function
prototypes in the kernel architecture header file. So remove it.
An added bonus is that these functions can appear in the documentation
without being explicitly enabled in pre-defines during the doc build.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-18 19:18:30 -05:00
Krzysztof Chruscinski 50c7c7b1e4 sys: time_units: Add Kconfig option for algorithm selection
Add the maximum timeout used for conversion to Kconfig. The option is
used to determine which conversion algorithm to use: faster but
overflowing earlier, or slower without early overflow.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2022-01-18 13:11:52 -05:00
Enjia Mai 38f00bd438 kernel: smp: change back the order of SMP thread and timer initialization
Commit 3457118 changed the sequence of z_smp_thread_init() and
smp_timer_init() in the smp_init_top() subroutine, which initializes
the other BPs. On some boards (up_squared, acrn_ehl_crb) this fails
when SMP is enabled if the timer interrupt is enabled before the
first thread has been initialized. Now change back to the original order.

Fixes #41835

Signed-off-by: Enjia Mai <enjia.mai@intel.com>
2022-01-18 12:59:50 -05:00
Daniel Leung d3b030be9b kernel: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Andy Ross 3457118540 kernel/smp: Fix races in SMP initialization
This has bitrotten a bit.  Early implementations had a synchronous
arch_start_cpu(), but then we started allowing that to be an async
operation.  But that means that CPU start now becomes surprisingly
reentrant to the arch layer (cpu 0 can get a call to start cpu 2 while
cpu 1's initialization code is still running).  That's just error
prone; we never documented the requirements cleanly (the window is
very small, but not so small for a slow simulator!).

Add an extra flag so we don't issue the next start until the last is
out of the arch layer and running in smp_init_top().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-01-11 11:53:53 +01:00
Peter Mitsis d1353a4584 kernel: pipes: add pipe flush routines
Adds two routines to flush pipe objects:
   k_pipe_flush()
     - This routine flushes the entire pipe. That includes both
     the pipe's buffer and all pended writers. It is equivalent
     to reading everything into a giant temporary buffer which
     is then discarded.
   k_pipe_buffer_flush()
     - This routine flushes only the pipe's buffer (if it exists).
     It is equivalent to reading a maximum of "buffer size" bytes
     into a temporary buffer which is then discarded.
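
A short usage sketch (my_pipe is a hypothetical, already-initialized
pipe object):

    /* discard everything readable, including data from pended writers */
    k_pipe_flush(&my_pipe);

    /* discard only what currently sits in the pipe's internal buffer */
    k_pipe_buffer_flush(&my_pipe);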

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis da7fbad057 kernel: pipes: fix race condition
Fixes a race condition in the k_pipe_cleanup() routine by adding
a spinlock. Additionally, internal counters are now reset after
freeing the buffer as the pipe has now become a bufferless pipe.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis 7aa874a251 kernel: pipes: refactor
Minor refactoring done to the pipes codebase to remove unnecessary
wrapper functions.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis 434cd1505e kernel: remove NUM_PIPE_ASYNC_MSGS
Removes references to NUM_PIPE_ASYNC_MSGS as asynchronous pipe
operations are not supported.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis ecd3164671 kernel: pipes: fix build warnings
Resolves void pointer arithmetic build warnings in k_pipe_put() by
casting the pointer to a uint8_t pointer.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 12:17:14 -05:00
Peter Mitsis 0ebd6c7f26 kernel/sched: enable/disable runtime stats
Adds support to enable/disable both thread and cpu runtime
stats.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis 4eb1dd02cc kernel: extend CPU runtime stats
Extends the CPU usage runtime stats to track current, total, peak
and average usage (as bounded by the scheduling of the idle thread).
This permits a developer to obtain more system information if desired
to tune the system.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis 572f1db56a kernel: extend thread runtime stats
When the new Kconfig option CONFIG_SCHED_THREAD_USAGE_ANALYSIS
is enabled, additional timing stats are collected during context
switches. This extra information allows a developer to obtain the
current, longest, average and total lengths of the time that
a thread has been scheduled to execute.

A developer can in turn use this information to tune their app and/or
alter their scheduling policies.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis 5deaffb2ee kernel: update z_sched_thread_usage()
This commit does two things to the z_sched_thread_usage(). First,
it updates the API so that it accepts a pointer to the runtime
stats instead of simply returning the usage cycles. This gives it
the flexibility to retrieve additional statistics in the future.

Second, the runtime stats are only updated if the specified thread
is the current thread running on the current core.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis 82c3d531a6 kernel: move thread usage routines to own file
Moves the CONFIG_SCHED_THREAD_USAGE block of code out of sched.c
into its own file. Not only do they employ their own private
spin lock, but it is expected that additional usage routines will be
added in the future.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-10 10:38:06 -05:00
Peter Mitsis 48f516469a kernel: Fix typo in macro name
Fixes a typo in the macro ARCH_DYMANIC_OBJ_K_THREAD_ALIGNMENT
so that DYMANIC becomes DYNAMIC.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2022-01-07 11:20:46 -05:00
Gerard Marull-Paretas 0b0435a96c device: deprecate (z_)device_usable_check
The functionality provided by device_usable_check is already provided by
device_is_ready. The (z_)device_usable_check APIs have been
re-implemented using the (z_)device_is_ready APIs and have been marked
as deprecated.
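
Call sites migrate roughly like this (dev is a previously obtained
const struct device pointer):

    /* deprecated: if (device_usable_check(dev) == 0) { ... } */
    if (device_is_ready(dev)) {
        /* device initialized successfully and is usable */
    }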

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-01-07 10:41:23 -05:00
Gerard Marull-Paretas 47bfb6fb4c device: implement device_is_ready as syscall
Instead of using device_usable_check() syscall, implement a new syscall
for device_is_ready that uses z_device_is_ready underneath.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-01-07 10:41:23 -05:00
Gerard Marull-Paretas b2b8fdf539 device: s/z_device_ready/z_device_is_ready
Rename z_device_ready to z_device_is_ready. Function name suggests a
boolean result this way, in line with other functions (e.g.
device_is_ready).

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-01-07 10:41:23 -05:00
Berend Ozceri b208e5811e kernel/swap: Initialize dummy thread's resource pool
The resource pool of the short-lived dummy thread "stub" may be
inherited by other threads created during system initialization. This
commit initializes this resource pool to NULL or the system pool to
ensure that a well-defined resource pool propagates to other threads
that inherit it from the dummy thread.

Fixes #41482.

Signed-off-by: Berend Ozceri <berend@recogni.com>
2022-01-06 11:57:18 -05:00
Jeremy Bettis fb1c36f7fd build: hide z_priq_mq_add/z_priq_mq_remove
Move z_priq_mq_add and z_priq_mq_remove into #ifdef CONFIG_SCHED_MULTIQ
block, because they are only used with that config.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2022-01-04 11:52:10 -05:00
Daniel Leung b6dd960be8 kernel: userspace: fix dynamic kernel object alignment
Previous commit 55350a93e9 fixing
address-of-packed-mem warnings uncovered an issue with
the alignment of dynamic kernel objects. On 64-bit platforms,
the alignment is 16 bytes instead of 4/8 bytes (as in pointer,
void *). This changes the function that maps kernel object types to
alignments so that it uses the dynamic object struct as the basis for
alignment instead of simply using pointers.

This also uncomments the assertion added in the previous commit
55350a93e9 so that we can keep
an eye on the alignment in the future. Note that the assertion
is moved after checking if the incoming kernel object is
dynamically allocated. Static kernel objects are not subjected
to this alignment requirement.

Fixes #41062

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-12-20 12:48:58 -05:00
Peter Mitsis f8b76f3b03 kernel: add 'static' keyword to select routines
Applies the 'static' keyword to the following inlined routines:
    z_priq_dumb_add()
    z_priq_mq_add()
    z_priq_mq_remove()
As those routines are only used in one place, they no longer have
externally visible declarations.

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-12-13 17:21:58 -05:00
Lixin Guo d4826d874e kernel: work: remove unused if statement
The work == NULL condition is already checked earlier, so there is no
need to check it again.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
2021-12-13 17:20:56 -05:00
Jeremy Bettis 1e0a36c655 build: Remove unused functions
Removed unused functions, or moved them inside #ifdefs.

This allows using -Werror=unused-function on the clang compiler. Tested
by building the ChromeOS EC on all supported platforms with
-Werror=unused-function.

Signed-off-by: Jeremy Bettis <jbettis@google.com>
2021-12-13 15:49:08 -05:00
Carles Cufi 55350a93e9 kernel: userspace: Fix address-of-packed-mem warning
The warning below appears once -Waddress-of-packed-mem is enabled:

/home/carles/src/zephyr/zephyr/kernel/userspace.c: In function
'unref_check':
/home/carles/src/zephyr/zephyr/kernel/userspace.c:471:28: warning:
converting a packed 'struct z_object' pointer (alignment 4) to a 'struct
dyn_obj' pointer (alignment 16) may result in an unaligned pointer value
[-Waddress-of-packed-member]
  471 |    CONTAINER_OF(ko, struct dyn_obj, kobj);

To avoid the warning, use an intermediate void * variable.
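
The pattern is simply to launder the pointer through a void * so the
compiler no longer sees a packed-to-unpacked conversion, e.g. (a
generic illustration, not the exact kernel code):

    void *vko = ko;  /* intermediate void * drops the packed attribute */
    struct dyn_obj *obj = CONTAINER_OF(vko, struct dyn_obj, kobj);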

More info in #16587.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
Carles Cufi 2000cdf6dc kernel: mmu: Fix access to unpacked member inside packed struct
The following warning is triggered by GCC when
-Waddress-of-packed-member is enabled:

/home/carles/src/zephyr/zephyr/kernel/mmu.c: In function
'free_page_frame_list_put':
/home/carles/src/zephyr/zephyr/kernel/mmu.c:383:42: warning: taking
address of packed member of 'struct z_page_frame' may result in an
unaligned pointer value [-Waddress-of-packed-member]
  383 |  sys_slist_append(&free_page_frame_list, &pf->node);

This is due to the fact that sys_snode_t node is an unpacked structure
inside a packed z_page_frame structure, so that the alignment of the
former cannot be ensured if placed inside the latter.

Given that alignment of z_page_frame is ensured by the code, silence the
compiler by going through an intermediate variable.

More info in #16587.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
Daniel Leung 650a629b08 debug: gdbstub: remove start argument from z_gdb_main_loop()
Storing the state where this is the first GDB break can be done
in the main GDB stub code. There is no need to store the state
in architecture layer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Flavio Ceolin 623ed5ae29 pm: Remove invalid comments
Remove comments referencing an old function / behavior.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-29 19:37:55 -05:00
Flavio Ceolin 72262475f4 pm: idle: Remove not necessary branch
Remove a constant branch that directly calls pm_save_idle.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-29 19:37:55 -05:00
NingX Zhao 5287ce8ff0 poll: modify the function z_vrfy_k_poll
Remove the 'U' suffix to avoid changing the type of num_events,
and make sure the Z_SYSCALL_VERIFY macro check is meaningful.

Fixed #40614

Signed-off-by: NingX Zhao <ningx.zhao@intel.com>
2021-11-25 18:23:51 -05:00
Daniel Leung fff9180614 kernel: mmu: make virtual region bitmap static
The virtual region bitmap bitarray struct is only used within
the source file, so it can be declared static.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-24 14:22:23 -05:00
Jordan Yates f515aa0cf0 device: supported devices visitor API
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.

Implements #37793.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2021-11-23 12:17:14 +01:00
Daniel Leung a60317167b kernel: mem_domain: k_mem_domain_add_thread to return errors
This updates k_mem_domain_add_thread() to return errors so
the application has a chance to recover.
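
Callers can now react instead of faulting, along these lines (a sketch;
app_domain and the recovery action are application-specific):

    int ret = k_mem_domain_add_thread(&app_domain, thread_id);

    if (ret != 0) {
        /* e.g. log and fall back instead of relying on an assert */
        printk("k_mem_domain_add_thread failed: %d\n", ret);
    }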

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Daniel Leung bb595a85f1 kernel: mem_domain: add/remove partition funcs to return errors
This changes both k_mem_domain_add_partition() and
k_mem_domain_remove_partition() to return errors instead of
asserting when errors are encountered. This gives the application
a chance to recover.

The arch_mem_domain_partition_add()/_remove() will be modified
later together with all the other arch_mem_domain_*() changes
since the architecture code for partition addition and removal
functions usually cannot be separately changed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Daniel Leung fb91ce2e21 kernel: mem_domain: init function to return error values
This changes k_mem_domain_init() to return error values
instead of asserting when errors are encountered.
This gives applications a chance to recover if needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
Krzysztof Chruscinski c0808e3f59 logging: Minimal mode configuration cleanup
Remove LOG_MINIMAL kconfig option which was confusing
since LOG_MODE_MINIMAL existed. LOG_MINIMAL was used to
force minimal mode but because of invalid dependencies
it was leading to issues.

Refactored code to use LOG_MODE_MINIMAL everywhere and
renamed LOG_MINIMAL to LOG_DEFAULT_MINIMAL, which impacts the
default logging mode (which can still be changed later
in a conf file or in menuconfig).

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-11-20 11:58:40 -05:00
Flavio Ceolin 7dd4297214 pm: Remove unused parameter
The ticks parameter of z_pm_save_idle_exit is not used, so there is
no need to have it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-17 11:15:49 -05:00
Torbjörn Leksell 86d8b36955 Tracing: k_free tracing hook heap reference added
Added heap reference parameter to k_free tracing
hook to allow tracing of the pointer which was
passed as a parameter to a k_free call.
As part of this update, the defines
(for this hook) in the various tracing formats
were also updated.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-11-16 09:45:01 -05:00
Jordan Yates 44b8a0d199 scripts: gen_handles.py: remove size restrictions
With `gen_handles.py` now running on the first pre-built image,
`zephyr_pre0.elf` there is no requirement for the device handle arrays
to remain the same size after processing.

Remove the padding generated in `gen_handles.py`, as well as the
temporary option `CONFIG_DEVICE_HANDLE_PADDING` which was added to work
around this issue.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2021-11-16 10:41:59 +01:00
Lixin Guo 6aa783ed6a kernel: mmu: exclude some funcs from coverage
page_frame_dump() and z_page_frames_dump() are used for
debug printing, so there is no need to cover those functions.
The __weak function is also excluded, as every test overrides it.

Signed-off-by: Lixin Guo <lixinx.guo@intel.com>
2021-11-12 11:57:50 -05:00
Andy Ross 410f911018 kernel/sched: Separate idle from app thread stats in THREAD_USAGE
It turns out that we have a sample (though not a test) that really
does want to use "k_thread_runtime_stats_all_get()" to measure system
uptime.

Instead of breaking this needlessly, separate the accounting for idle
and non-idle threads.  The legacy API can report their sum, and the
more useful value is available via the kernel struct for future
analysis.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross f169c5bc13 kernel: Swap RUNTIME_STATS implementation
Clean up RUNTIME_STATS to separate the API from the individual data
backends.  Use the SCHED_THREAD_USAGE tracking instead of the original
for execution_cycles.  Move the kconfig for that into the runtime
stats menu, since it's part of the family now.

Also remove a lot of needless #if's around the declarations.  Unused
structs and uncalled functions don't need to be explicitly hidden.  An
attempt to access a non-existent field (e.g. "execution_cycles" if
that isn't configured) provides all the build time validation we need.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 52351458f4 kernel/sched: Add timing.h support to thread_usage
The runtime stats feature has always supported this, so use the same
kconfig to indirect the timing source in the same way.

(Personally I'm not a fan of the "timing" API, which really doesn't do
anything that the existing core "cycles" API does not except add a
bunch of code due to the separate implementation of frequency
management and conversion routines.  It comes from an era where
"cycles" were fixed to a MHz frequency clock on platforms like x86 yet
we had benchmarks that wanted to use the TSC.  Those days are behind
us and "cycles" can be fast everywhere.)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross b62d6e17a4 kernel/sched: Add an optional "all" counter for thread_usage
Tally the runtime of all non-idle threads.  Make it optional via
kconfig to avoid overhead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 4ae3250301 sched: Hook SCHED_USAGE from existing tracing hook
On older architectures, we don't have the
architecture-independent/scheduler-internal hooks (which require
USE_SWITCH) but there is a hook shared by the tracing layer we can use.

This is sort of a layering violation (stat tracking is a core feature,
tracing is supposed to be optional), but simple and lightweight.  And
eventually it will go away as these architectures migrate.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Andy Ross 40d12c142d kernel/sched: Add "thread_usage" API for thread runtime cycle monitoring
This is an alternate backend that does what THREAD_RUNTIME_STATS is
doing currently, but with a few advantages:

* Correctly synchronized: you can't race against a running thread
  (potentially on another CPU!) while querying its usage.

* Realtime results: you get the right answer always, up to timer
  precision, even if a thread has been running for a while
  uninterrupted and hasn't updated its total.

* Portable, no need for per-architecture code at all for the simple
  case. (It leverages the USE_SWITCH layer to do this, so won't work
  on older architectures)

* Faster/smaller: minimizes use of 64 bit math; lower overhead in
  thread struct (keeps the scratch "started" time in the CPU struct
  instead).  One 64 bit counter per thread and a 32 bit scratch
  register in the CPU struct.

* Standalone.  It's a core (but optional) scheduler feature, no
  dependence on para-kernel configuration like the tracing
  infrastructure.

* More precise: allows architectures to optionally call a trivial
  zero-argument/no-result cdecl function out of interrupt entry to
  avoid accounting for ISR runtime in thread totals.  No configuration
  needed here, if it's called then you get proper ISR accounting, and
  if not you don't.

For right now, pending unification, it's added side-by-side with the
older API and left as a z_*() internal symbol.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Krzysztof Chruscinski 8979fbc906 kernel: timer: Call user handler without spinlock
Add spinlock unlocking before calling timer expiration
handler. Locking was introduced by dde3d6c.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-11-08 11:05:49 -05:00
Flavio Ceolin 9444480c7b pm: Better return type for pm_system_suspend
Instead of returning PM_STATE_ACTIVE when the cpu didn't enter a
low power state, and a different state when it did (even though it
has already left that state and is active again), this changes
pm_system_suspend to return true when the cpu has entered a low power
state and false otherwise.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-06 10:21:53 -04:00
Andy Ross 7dee7a6139 kernel/sched: Fix race with thread return values
There was a brief (but seen in practice on real apps on real
hardware!) race with the switch-based z_swap() implementation.  The
thread return value was being initialized to -EAGAIN after the
enclosing lock had been released.  But that lock is supposed to be
atomic with the thread suspend.

This opened a window for another racing thread to come by and "wake
up" our pending thread (which is fine on its own), set its return
value (e.g. to 0 for success) and then have that value clobbered by
the thread continuing to suspend itself outside the lock.

Melodramatic aside: I continue to hate this
arch_thread_return_value_set() API; it needs to die.  At best it's a
mild optimization on a handful of architectures (e.g. x86 implements
it by writing to the EAX register save slot in the context block).
Asynchronous APIs are almost always worse than synchronous ones, and
in this case it's an async operation that races against literal
context switch code that can't use traditional locking strategies.

Fixes #39575

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-10-25 12:31:06 +02:00
Chris Reed f8f3c00d07 kernel: thread.c: remove unused THREAD_COOKIE macro.
This macro was not referenced anywhere.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-22 18:08:56 -04:00
Peter Mitsis ae394bff7c kernel: add support for event objects
Threads may wait on an event object such that any events posted to
that event object may wake a waiting thread if the posting satisfies
the waiting threads' event conditions.

The configuration option CONFIG_EVENTS is used to control the inclusion
of events in a system as their use increases the size of
'struct k_thread'.
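
A short sketch of the new API (the event bit values are arbitrary):

    static struct k_event my_event;

    k_event_init(&my_event);

    /* producer (thread or ISR): post one or more event bits */
    k_event_post(&my_event, 0x01);

    /* consumer: wait for any of the requested bits to be set */
    uint32_t events = k_event_wait(&my_event, 0x01, false, K_FOREVER);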

Signed-off-by: Peter Mitsis <peter.mitsis@intel.com>
2021-10-16 06:27:10 -04:00
Chris Reed ab4d69baf3 kernel: work_q: use flags_get() in work_delayable_busy_get_locked().
The k_work::flags field is not an atomic_t and would cause
-Wpointer-sign warning on some compilers. This function was the only
one in work.c to use atomic_get() so there is no benefit to atomicity.

Signed-off-by: Chris Reed <chris.reed@arm.com>
2021-10-16 06:23:46 -04:00
Neil Armstrong 2f359aeacf mmu: fix virt_region_alloc() unused region free when aligned
In the case where the aligned memory range lies at the top of the
allocated memory range, freeing the zero-sized unused memory at the top
will trigger an assert in the virt_region_free() call, since vaddr could
be equal to Z_VIRT_REGION_END_ADDR.

Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-13 06:24:56 -04:00
Neil Armstrong 7830f87ccd mmu: get virtual alignment from physical address
On ARM64 platforms, when mapping multiple memory zones with size
not multiple of a L2 block size (2MiB), all the following mappings
will probably use L3 tables.

And a huge mapping will consume all possible L3 tables.

In order to reduce usage of L3 tables, this introduces a new
arch_virt_region_align() optional architecture specific
call to eventually return a more optimal virtual address
alignment than the default MMU_PAGE_SIZE.

This alignment is used in virt_region_alloc() by:
- requesting more pages in virt_region_bitmap to make sure we request
  up to the possible aligned virtual address
- freeing the supplementary pages used for alignment

Suggested-by: Nicolas Pitre <npitre@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
2021-10-11 21:00:28 -04:00
Stefan Eicher fbe7d7297e comments: minor typo fixes
Fixing minor typos in comments

Signed-off-by: Stefan Eicher <stefan.eicher@ypsomed.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-10-05 07:18:13 -04:00
Martí Bolívar a627666e06 device: add fudge factor for handle padding
When CONFIG_USERSPACE is enabled, the ELF file from linker pass 1 is
used to create a hash table that identifies kernel objects by address.
We therefore can't allow the size of any object in the pass 2 ELF to
change in a way that would change those addresses, or we would create
a garbage hash table.

Simultaneously (and regardless of CONFIG_USERSPACE's value),
gen_handles.py must transform arrays of handles from their pass 1
values to their pass 2 values; see the file's docstring for more
details on that transformation.

The way this works is that gen_handles.py just pads out each pass 2
array so its length is the same as its pass 1 value. The padding value
is a repeated run of DEVICE_HANDLE_ENDS values. This value is the
terminator which we look for at runtime in places like
device_required_handles_get(), so there must be at least one, and we
error out in gen_handles.py if there's no room in the pass 2 array for
at least one such value. (If there is extra room, we just keep
inserting extra DEVICE_HANDLE_ENDS values to pad the array to its
original length.)

However, it is possible that a device has more direct dependencies in
the pass 2 handles array than its corresponding devicetree node had in
the pass 1 array. When this happens, users have no recourse, so that's
a potential showstopper.

To work around this possibility for now, add a new config option,
CONFIG_DEVICE_HANDLE_PADDING, whose value defaults to 0.

When nonzero, it is a count of padding handles that are inserted into
each device handles array. When gen_handles.py errors out due to lack
of room, its error message now tells the user how much to increase
CONFIG_DEVICE_HANDLE_PADDING by to work around the problem.

It looks like a real fix for this is to allocate kernel objects whose
addresses are required for hash tables in CONFIG_USERSPACE=y
configurations *before* the handle arrays. The handle arrays could
then be resized as needed in pass 2, which saves ROM by avoiding
unnecessary padding, and would avoid the need for
CONFIG_DEVICE_HANDLE_PADDING altogether.

However, this 'real fix' is not available and we are facing a deadline
to get a temporary solution in for Zephyr v2.7.0, so this is a good
enough workaround for now.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2021-09-30 21:37:59 -04:00
Martí Bolívar b94a11f5d0 Revert "device: supported devices visitor API"
This reverts commit b01e41ccdd.

It's not clear that the supported devices are being properly computed,
so let's revert this for v2.7.0 until we've had more time to think
it through.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2021-09-30 21:37:59 -04:00
Chen Peng1 dde3d6ca9b timer: mask interrupts in timer's timeout handler.
Before running a timer's timeout function, we need to make
sure that the threads waiting on this timer have been
added into the timer's wait queue, so take the timer lock
to mask interrupts in the z_timer_expiration_handler
function and synchronize access to the timer's wait queue.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-09-29 09:18:12 -04:00
Andy Ross b11e796c36 kernel/sched: Add CONFIG_CPU_MASK_PIN_ONLY
Some SMP applications have threading designs where every thread
created is always assigned to a specific CPU, and never want to
schedule them symmetrically across CPUs under any circumstance.

In this situation, it's possible to optimize the run queue design a
bit to put a separate queue in each CPU struct instead of having a
single global one.  This is probably good for a few cycles per
scheduling event (maybe a bit more on architectures where cache
locality can be exploited) in circumstances where there is more than
one runnable thread.  It's a mild optimization, but a basically simple
one.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross b155d06712 kernel/sched: Factor out ready_q initialization
Split "init_ready_q()" into a separate function that operates on the
queue pointer and not the global kernel object.  Pure refactoring.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross 387fdd2e53 kernel/sched: Refactor/simplify run queue accessors
Similar to the previous patch, the various _priq_run_*() functions are
always passed a first argument that is the singleton system run queue
(this is because the same backend functions are used by wait queues).

Refactor into a simpler API that places the access to the run queue in
just a single spot.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Andy Ross c230fb3580 kernel/sched: Simplify de/queue_thread()
Pure refactoring.  For historical reasons these two functions took a
first argument (a pointer to the run queue) that was always the same.
Eliminate it.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-28 20:15:05 -04:00
Krzysztof Chruscinski 76c309db7b kernel: thread: Fix thread runtime stats
Add missing parentheses. Without them, wrong results
appeared when k_cycle_get_32() wrapped.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-09-23 11:31:02 -04:00
Chen Peng1 0f63d1135c cmsis_rtos_v1: fix thread instances management.
Add a bitarray into struct osThreadDef_t to indicate whether a
thread is in use. We can then get the first available thread
by searching this array when creating a new thread, and update this
array to mark a thread as free when terminating it.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-09-09 12:01:06 -04:00
Andy Ross 0d763e0a10 cmake/compiler/xcc: sched: Support XCC inlining semantics
Cadence XCC is based off of a very old 4.2 gcc compiler, which didn't
perfectly support C99 "inline" semantics with respect to
cross-translation-unit inline linkage (which Zephyr does not use, our
inlines are static only) and declaration order.

Fix the one spot where we were calling an inline before its
ALWAYS_INLINE definition, and add a flag to suppress the warning so
CI's trying to build with XCC and -Werror don't flip out.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-08 09:28:31 -04:00
Christopher Friedt 86f94ebaa6 kernel: init: remove empty lcov exclusion
Remove the empty lcov exclusion left over from the weak main
function.

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2021-09-06 08:18:15 -04:00
Daniel Leung 049e3bac73 kernel: add -ENOTSUP doc to arch_float_en-/dis-able()
Some architectures already return -ENOTSUP when these functions
are called. So add this return value to the API doc.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-03 10:00:02 -04:00
Andy Ross c6d077e1bc soc: intel_adsp/cavs_v25: Add CPU halt and relaunch APIs
Add a SOC API to allow for application control over deep idle power
states.  Note that the hardware idle entry happens out of the WAITI
instruction, so the application has to be responsible for ensuring
that the CPU to be halted actually reaches idle deterministically. Lots of
warnings in the docs to this effect.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-09-03 07:19:34 -04:00
Stephanos Ioannidis a77e7566a8 kernel: Remove unused timeout_q from z_kernel
This commit removes the `timeout_q` from the `struct z_kernel` since it
is no longer used.

Note that the new kernel timeout implementation introduced in the
commit 987c0e5fc1 uses `timeout_list`
global variable in place of it.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2021-09-01 19:51:08 -04:00
Torsten Rasmussen 25e1b12ec0 kernel: extract __weak main() into independent file
To support arm-ds / armlink it is required that the weak main is located
in an object externally to the object using the weak symbol.

If the weak symbol is inside the object referring to it, then the weak
symbol will be used and this will result in
```
Error: L6200E: Symbol __ARM_use_no_argv multiply defined
    (by init.o and main.o).
```
as both the weak and strong symbols are used.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen a28830b811 linker: align __itcm_load_start / __dtcm_data_load_start linker symbols
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and ensure consistent naming of
symbols, the following pattern is introduced consistently to allow
for cleaner linker script generation.
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the symbols for __itcm_load_start and
__dtcm_data_load_start to other symbols and in such a way they follow
consistent pattern which allows for linker script and scatter file
generation.

The symbols are named according to the section name they describe.
Section names are itcm and dtcm.

The following symbols are aligned in this commit:
-  __itcm_rom_start      -> __itcm_load_start
-  __dtcm_data_rom_start -> __dtcm_data_load_start

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen 510d7dbfb6 linker: align _ramfunc_ram/rom_start/size linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and ensure consistent naming of
symbols, the following pattern is introduced consistently to allow
for cleaner linker script generation.
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the symbols for _ramfunc_ram/rom to other symbols and
in such a way they follow consistent pattern which allows for linker
script and scatter file generation.

The symbols are named according to the section name they describe.
Section name is `ramfunc`

The following symbols are aligned in this commit:
-  _ramfunc_ram_start  -> __ramfunc_start
-  _ramfunc_ram_end    -> __ramfunc_end
-  _ramfunc_ram_size   -> __ramfunc_size
-  _ramfunc_rom_start  -> __ramfunc_load_start

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Torsten Rasmussen 65a2de84a9 linker: align __data_ram/rom_start/end linker symbol names
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provides start and end symbols for each section,
and sometimes even size and LMA start symbols.

Generally, start and end symbols use the following pattern:
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end

However, this pattern is not followed consistently.
To allow for linker script generation and ensure consistent naming of
symbols, the following pattern is introduced consistently to allow
for cleaner linker script generation.
Section name:             foo
Section start symbol:     __foo_start
Section end symbol:       __foo_end
Section size symbol:      __foo_size
Section LMA start symbol: __foo_load_start

This commit aligns the symbols for _data_ram/rom to other symbols and in
such a way they follow consistent pattern which allows for linker script
and scatter file generation.

The symbols are named according to the section name they describe.
Section name is `data`

A new group named data_region is introduced which instead spans all the
input and output sections that were previously covered by
__data_ram_start, __data_ram_end, and __data_rom_start.

The following symbols are aligned in this commit:
-  __data_ram_start  -> __data_region_start
-  __data_ram_end    -> __data_region_end
-  __data_rom_start  -> __data_region_load_start

The following new symbols are introduced so that the data section is
aligned with other sections:
-  __data_end
-  __data_start
       value identical to __data_region_start but describes start of
       the section.

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Gerard Marull-Paretas 9917de4a1b pm: fully initialize pm_device on Z_DEVICE_STATE_DEFINE
The struct pm_device pm field found in the device state can be
statically initialized, without the need of doing it at runtime.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-08-27 09:06:51 -04:00
Daniel Leung 5c4fff3998 kernel: kheap: make init work with demand paging
With demand paging, the heap object and its backing memory
may not be in physical memory. So initialize those heaps
in the pinned region at PRE_KERNEL_1 and the remaining heaps
once the paging mechanism has been initialized.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 2dfae4a0f7 kernel: demand_paging: allow reserving page frames
This adds the kconfig to allow reserving a number of page frames
which do not count towards free memory. This is to ensure that
there are enough page frames available for paging code and data.
Or else, it would be possible to exhaust all page frames via
anonymous memory mappings.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 2117a2a44b kernel: app_smem: allow pinning memory partitions
This allows memory partitions to be put into the pinned
section so they are available during boot. For example,
the stack guard (in libc partition) is needed during boot
but before the paging mechanism is initialized. Without
pinning it in physical memory, it would fault in early
boot process.

A new cmake property app_smem,pinned_partitions is
introduced so that additional partitions can be pinned
if needed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung e88afd2c37 kernel: mmu: pin/unpin boot sections during boot process
During the boot process, the boot sections need to be pinned in
memory to prevent them from being paged out (to avoid
pages being paged out and immediately paged in again).
Once the boot process is completed (just before calling main()),
the boot sections can be unpinned so the memory can be
used for demand paging for paging in data pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung f32ea4433c kernel: demand_paging: clear BSS after paging is initialized
If the BSS section is not present in memory at boot, it would not
have been cleared, as the data pages are not in physical memory.
Manipulating those pages would result in page faults.
In this scenario, zeroing BSS can only be done once the paging
mechanism has been initialized. So do it there.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung 7771d27525 kernel: mmu: move when page fault is counted
The beginning of code in do_page_fault() is to pin the page
in memory if it is already present in physical memory.
It is there so that if a page is not present, it can proceed
to perform page-in and then pin it. So the counting of
page faults needs to be moved after the pinning code so that
it actually counts page faults rather than counting pinning
operations when the page is already present.

Also clarify the comment on the goto statement as it is not
correct.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung ebbfde9742 kernel: move z_main_stack into pinned section
The z_main_stack is needed before paging mechanism is initialized
so put the stack into the pinned section to avoid page faults.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung c38634fa33 kernel: mmu: fix assigning unaligned addr to page frame
In do_page_fault(), the incoming page fault address is not
aligned, and it was unconditionally assigned to the page
frame virtual address field. If the backing store simply
returns the virtual address without processing in
k_mem_paging_backing_store_location_get(), this unaligned
address will be passed to arch_mem_page_out(). On x86,
it is further passed to range_map() which asserts if
the physical address is not page aligned. So align
the address to page size before assigning it to the page
frame virtual address field.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Flavio Ceolin d9aa414831 kernel: work_q: Add an init function
k_work_queue_start receives a struct that is expected to be
uninitialized (zeroed). Otherwise the behavior is undefined.

Following the Zephyr semantics, this PR introduces a new init function
for this struct.

Fixes #36865

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-08-25 22:07:04 -04:00
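A minimal sketch of how the new init function pairs with k_work_queue_start(); the stack size, priority, and names such as workq_stack are illustrative, and the <zephyr/kernel.h> include path follows the newer header layout.
```
#include <zephyr/kernel.h>

#define WORKQ_STACK_SIZE 1024
#define WORKQ_PRIORITY   5

K_THREAD_STACK_DEFINE(workq_stack, WORKQ_STACK_SIZE);

static struct k_work_q my_work_q;

static void work_handler(struct k_work *work)
{
        /* ... deferred processing ... */
}

K_WORK_DEFINE(my_work, work_handler);

void start_work_queue(void)
{
        /* Bring the queue struct into a defined (zeroed) state
         * before starting it, as the commit above requires. */
        k_work_queue_init(&my_work_q);

        k_work_queue_start(&my_work_q, workq_stack,
                           K_THREAD_STACK_SIZEOF(workq_stack),
                           WORKQ_PRIORITY, NULL);

        k_work_submit_to_queue(&my_work_q, &my_work);
}
```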
Jordan Yates b01e41ccdd device: supported devices visitor API
Adds an API to query and visit supported devices. Follows the example
set by the required devices API.

Implements #37793.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2021-08-25 19:43:54 -04:00
Daniel Leung 45f549dee9 kernel: allow linking a prebuilt library instead of compiling
This adds very primitive logic to allow linking a prebuilt
static library of kernel code instead of building the kernel
from source. Note that the library is built with a specific
set of kconfigs, and they must match when building applications,
or else there would be mysterious crashes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-25 18:01:14 -04:00
Naiyuan Tian bc3fda491f kernel: userspace: fix typo in the comments
While reading the code, I found some typos in the code comments on
lines 226 and 668.
Fix the comments to make them clearer.

Signed-off-by: Naiyuan Tian <naiyuan.tian@intel.com>
2021-08-24 07:31:49 -04:00
Maksim Masalski 8ed9cddbc5 lib: add explanation comment in "work->handler" case
The code scanning tool raises a violation for the dereferencing
of the "work" pointer in the expression "work->handler".

As was discussed before in PR #35664, this is not true.
Add an explanatory comment, because the static code analysis tool
raised a false-positive violation there.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-08-20 18:48:16 -04:00
Fabio Baltieri 7f36f90167 kernel: drop unused priority related definitions
There are no references for either K_NUM_PRIORITIES or
K_NUM_PRIO_BITMAPS, with the former being dropped in:

  1acd8c2996 kernel: Scheduler rewrite

Dropping both of these, and also the two comments about extra priorities
taking extra RAM space, as those do not seem to apply either.

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2021-08-17 17:52:17 -04:00
Fabio Baltieri f88a420d69 toolchain: migrate iterable sections calls to the external API
This migrates all the current iterable section usages to the external
API, dropping the "Z_" prefix:

Z_ITERABLE_SECTION_ROM
Z_ITERABLE_SECTION_ROM_GC_ALLOWED
Z_ITERABLE_SECTION_RAM
Z_ITERABLE_SECTION_RAM_GC_ALLOWED
Z_STRUCT_SECTION_ITERABLE
Z_STRUCT_SECTION_ITERABLE_ALTERNATE
Z_STRUCT_SECTION_FOREACH

Signed-off-by: Fabio Baltieri <fabiobaltieri@google.com>
2021-08-12 17:47:04 -04:00
Flavio Ceolin 8eceeee798 pm: device: Add wakeup source API
Introduce a new API to allow devices capable of waking up the system
to register themselves as wake-up sources. This permits applications to
select the most appropriate way to wake up the system when it is
suspended.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-08-11 19:39:13 -04:00
Evgeniy Paltsev 497cb2e587 CPP: fix static objects init for MWDT toolchain
The constructors of static objects are stored in the ".ctors"
section. In the case of the MWDT toolchain, the ".ctors" section
format is incompatible with the GNU toolchain's. So let's use the
initialization code provided by MWDT instead of the Zephyr one
when the MWDT toolchain is used.

As is done for the GNU toolchain, we call the constructors of
static objects but we don't call destructors for them.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2021-08-09 22:47:22 -04:00
Gerard Marull-Paretas c2cf1ad203 pm: device: remove usage of local states
The device PM subsystem already holds the device state, so there is no
need to keep duplicates inside the device. The pm_device_state_get has
been refactored to just return the device state. Note that this is still
not safe, but the same applied to the previous implementation. This
problem will be addressed later.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-08-04 08:23:01 -04:00
Flavio Ceolin f2e323f06c futex: Avoid unnecessary lock
The spin lock is not necessary to protect an atomic operation.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-07-30 20:21:04 -04:00
Andrew Boie f07df42d49 kernel: make k_current_get() work without syscall
We cache the current thread ID in a thread-local variable
at thread entry, and have k_current_get() return that,
eliminating system call overhead for this API.

DL: changed _current to use z_current_get() as it is
    being used during boot where TLS is not available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-30 20:16:47 -04:00
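A minimal usage sketch; with thread-local storage enabled, the call below resolves to the cached thread ID instead of issuing a system call. The printk output and function name are illustrative.
```
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

void report_self(void)
{
        /* Resolves from a thread-local cache when TLS is available,
         * avoiding system call overhead from user mode. */
        k_tid_t self = k_current_get();

        printk("current thread: %p\n", self);
}
```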
Gerard Marull-Paretas 70322853a8 pm: device: move device busy APIs to pm subsystem
The following device busy APIs:

- device_busy_set()
- device_busy_clear()
- device_busy_check()
- device_any_busy_check()

were used for device PM, so they have been moved to the pm subsystem.
This means they are now prefixed with `pm_` and are defined in
`pm/device.h`.

If device PM is not enabled, dummy functions are now provided that do
nothing or return `-ENOSYS`, meaning that the functionality is not
available.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-07-30 09:28:42 -04:00
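A hedged sketch of how a driver might use the relocated, pm_-prefixed busy calls around a transaction; the device pointer handling and the <zephyr/pm/device.h> include path reflect newer trees and are assumptions here.
```
#include <zephyr/kernel.h>
#include <zephyr/pm/device.h>

int do_transfer(const struct device *dev)
{
        /* Tell the PM subsystem the device must not be suspended
         * while the transfer is in flight. */
        pm_device_busy_set(dev);

        int ret = 0; /* ... perform the bus transaction here ... */

        pm_device_busy_clear(dev);

        return ret;
}
```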
Guennadi Liakhovetski 339a6bdafb kernel: fix several typos in a comment in timeout.c
Fix several simple typos.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-07-23 16:06:54 -04:00
Guennadi Liakhovetski 45b70e1500 smp: limit the scope of some SMP-only functions
z_smp_init() is only available if CONFIG_SMP is defined, and
smp_timer_init() also depends on two Kconfig parameters. Also make it
conditional in cavs_timer.c. Also clarify some SMP-related comments
there.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-07-23 16:06:54 -04:00
Daniel Leung dbc0be487f kernel: use proper macro to declare extern interrupt stacks
The z_interrupt_stacks was declared extern in the kernel internal
header file using the same macro which defines the same stack
array but with an added "extern" in front. This macro adds
alignment and section attributes which are actually not the same
as the actual stack array defined in kernel/init.c. The section
name used in the section attribute contains the file name where
the stack array is defined or extern declared. So the same
symbol, in this case z_interrupt_stacks, has different
attributes in two places, and GCC 11 starts to complain about
this. So use the newly introduced macro to extern declare
the stack array without adding/replacing any symbol attributes.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-22 07:24:11 -05:00
David Palchak 043507dbd6 kernel: native_posix: Don't run global C++ constructors
On the native_posix board global object constructors
are run by the underlying OS runtime init prior to
Zephyr kernel init. Thus Zephyr should not run global
object constructors a second time. Doing so breaks
application behavior that relies on global
constructors doing work that must be done only once.
See bug #36858 for more information.

Signed-off-by: David Palchak <palchak@google.com>
2021-07-12 19:51:16 -04:00
Chih Hung Yu 0ef77d4ea4 kernel: Fix negative mutex lock_count value
If you try to unlock an unlocked mutex, it will incorrectly
succeed and decrease the lock count to -1.

Fixes #36572

Signed-off-by: Chih Hung Yu <chyu313@gmail.com>
2021-07-06 19:19:41 -04:00
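An illustrative check of the corrected behavior: unlocking a mutex that was never locked should now be rejected rather than driving lock_count negative. Only a nonzero error is checked here; the exact error code is not assumed.
```
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

K_MUTEX_DEFINE(my_mutex);

void unlock_error_demo(void)
{
        /* The mutex was never locked, so the unlock must not succeed
         * and lock_count must stay at 0 rather than going negative. */
        int ret = k_mutex_unlock(&my_mutex);

        if (ret != 0) {
                printk("unlock rejected as expected (err %d)\n", ret);
        }
}
```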
Anas Nashif 8b3f36c656 kernel: move internal headers into include/kernel
Move 2 headers that are internal to the kernel into include/kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-06-16 20:38:55 -04:00
Jennifer Williams ae85da1c90 kernel: poll: fix coding guideline 15.7 missing comment
The final else {} in the if...else if is missing required
comment (non-empty, ';' is not sufficient). This adds a comment
to comply with CG 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-06-04 16:22:50 -05:00
Maksim Masalski e9600a75fc kernel: thread: add default case, remove unused break
According to the Zephyr Coding Guidelines, all switch statements
shall be well-formed. Add a default case with a break and a comment
to prevent the static analysis tool from raising a violation that there is
no default case.
Also, I think there is no need to use "break" in the cases above,
because they already use "return".

Found as a coding guideline violation (MISRA R16.1) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-04 16:21:01 -05:00
Maksim Masalski 78ba2ec830 coding guidelines: add to function prototypes form named parameters
Function types shall be in prototype form with named parameters

Found as a coding guideline violation (MISRA R8.2) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-04 16:20:06 -05:00
Torbjörn Leksell 70d721c1bb Tracing: Incorrect Unlock Mutex Trace Hook Fix
Changed location of the last k_mutex_unlock trace hook since it was
being called after k_sched_unlock, which could result in tracing
scenarios (other thread waiting for lock) where it appeared that a
mutex was being locked again before becoming unlocked.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-06-03 07:10:05 -05:00
Ioannis Glaropoulos dd574f6ec7 kernel: stack_sentinel: disable in single-threaded builds
Add a dependency on MULTITHREADING for the
STACK_SENTINEL feature, so it may not get
enabled in single-thread Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-28 10:41:46 -05:00
Daniel Leung dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are those that need to be implemented by the backing
store outside the kernel. Promote them from z_* so they can be
included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung 31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are those that need
to be implemented by the eviction algorithm and application
outside the kernel. Promote them from z_* so they can be
included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Lauren Murphy 4c85b4606b kernel: k_sleep: fix return value for absolute timeout
Fixes the calculation of remaining ticks returned from z_tick_sleep
so that it takes absolute timeouts into account.

Fixes #32506

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-05-26 18:11:52 -05:00
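A small sketch exercising a sleep with an absolute deadline, assuming CONFIG_TIMEOUT_64BIT=y so that K_TIMEOUT_ABS_TICKS is available; the 50 ms deadline is illustrative.
```
#include <zephyr/kernel.h>

void sleep_until_deadline(void)
{
        /* Absolute deadline: "now" plus roughly 50 ms, in ticks. */
        int64_t deadline = sys_clock_tick_get() + k_ms_to_ticks_ceil64(50);

        /* k_sleep() returns the time remaining (in ms) if the thread
         * is woken early; with the fix this accounts for the absolute
         * deadline rather than treating it as a relative timeout. */
        int32_t remaining_ms = k_sleep(K_TIMEOUT_ABS_TICKS(deadline));

        ARG_UNUSED(remaining_ms);
}
```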
Ioannis Glaropoulos 4084242a71 kernel: make MULTITHREADING promptless if single-thread not supported
If single thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig that
reflects whether the ARCH is capable of single-thread
Zephyr builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-05-26 11:03:22 -05:00
Flavio Ceolin 97281b3862 pm: device_runtime: get rid of the spinlock
Protect critical sections using the mutex.
The mutex is required to use the condition variable, and since we
need to atomically check the pm state and the workqueue before waiting
on the condition, it is necessary to protect them using the same mutex.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-26 10:56:55 -04:00
Flavio Ceolin 378a19d2a8 pm: device_runtime: Add helper to wait on async ops
Add a function that properly uses a mutex to check a condition before
waiting on the condition variable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-26 10:56:55 -04:00
Maksim Masalski 970820e92d sched: create unique function name
In the file include/kernel/thread.h, "struct _thread_base" has a member
called "_wait_q_t *pended_on".
At the same time, the file kernel/sched.c has a function called
"static _wait_q_t *pended_on()".

The code scanning tool reports a violation (MISRA R5.9) that a static
object is reused, because thread.h is included in the sched.c file.

I think we can rename the function to avoid misreading in the future.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-25 19:06:21 -04:00
Andrzej Głąbek 59b21a29aa kernel: timeout: Fix adding of an absolute timeout
Correct the way the relative ticks value is calculated for an absolute
timeout. Previously, elapsed() was called twice and the returned value
was first subtracted from and then added to the ticks value. It could
happen that the HW counter value read by elapsed() changed between the
two calls to this function. This caused the test_timeout_abs test case
from the timer_api test suite to occasionally fail, e.g. on certain nRF
platforms.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2021-05-24 23:53:18 -04:00
Andy Ross 851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
Maksim Masalski d6c9d40ee0 userspace: remove dead code
File userspace.c contains dead code in the function char *otype_to_str().
Remove "return NULL" and replace it with "ret = NULL".

Found as a coding guideline violation (MISRA R2.1) by static
coding scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-24 22:35:03 -04:00
Armando Visconti 4b4068c948 kernel/device: add arg checking in z_device_ready()
If this call receives an invalid device pointer as argument it
assumes that the `device` is not ready for usage.

This routine is currently called by two device specific APIs:

    - device_usable_check(const struct device *dev)
    - device_is_ready(const struct device *dev)

The device-specific APIs documentation claims that these two
routines must be called with a device pointer captured from
DEVICE_DT_GET(). So passing NULL is a violation of the rule.

Nevertheless, it is quite common in drivers to assign NULL to
a device pointer if the corresponding DT property has not been
found (e.g. an unused gpio interrupt declaration for a given
device instance), and it seems legitimate to interpret this condition
the same as the device not being ready for usage.

Signed-off-by: Armando Visconti <armando.visconti@st.com>
2021-05-18 11:29:46 -05:00
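A hedged sketch of the tolerated pattern described above: a driver keeps a possibly-NULL optional device pointer and lets device_is_ready() treat NULL as simply not ready. Names are illustrative.
```
#include <zephyr/kernel.h>
#include <zephyr/device.h>

/* May legitimately be NULL when the optional DT property is absent. */
static const struct device *irq_gpio_dev;

void configure_optional_irq(void)
{
        if (!device_is_ready(irq_gpio_dev)) {
                /* NULL (or a not-yet-initialized device) is treated
                 * as "not ready"; skip the optional interrupt setup. */
                return;
        }

        /* ... set up the GPIO interrupt here ... */
}
```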
Peter Bigot 656c09589a kernel: work: fix race condition with cancel before work runs
The original state management solution involved separate locks for a
work queue and each work item.  To avoid inter-lock dependencies a
window was left between the point where the work item was removed from
the queue (protected by queue lock) and the point where the work item
state was updated to mark the work item running.

This introduced a bug: If a cancellation was issued during this window
it would succeed, and the work item would appear to be idle even
though in fact the work queue thread was about to run it.

Since there is now only one lock, move the work item state updates
into the mutex regions associated with dequeuing the work item and
clearing the work queue busy flag.

Note that removing the window between queue and work mutex regions
eliminates the potential of having a dequeued work item be cancelled
before its QUEUED flag is cleared, simplifying the work item state
update.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-05-18 15:02:08 +02:00
Maksim Masalski 929956df70 coding guidelines rule 14_3_j: add explicit case check
Violation of the [MISRAC2012-RULE_14_3-j]:
Boolean operations whose results are invariant
shall not be permitted

There is probably a misprint in that part of the code.
Added an explicit check for the _OBJ_INIT_FALSE case.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-05-18 08:36:57 -04:00
Anas Nashif 7b9084cb4f ztest: set thread name to test name
Use the actual test name and not a hardcoded thread name. This is useful
for tracing test cases.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-17 18:45:57 -04:00
Andy Ross bd077561d0 kernel/swap: Add assertion to catch lock-breaking context switches
Our z_swap() API takes a key returned from arch_irq_lock() and
releases it atomically with the context switch.  Make sure that the
action of the unlocking is to unmask interrupts globally.  If
interrupts would still be masked then that means there is an OUTER
interrupt lock still held, and the code that locked it surely doesn't
expect the thread to be suspended and interrupts unmasked while it's
held!

Unfortunately, this kind of mistake is very easy to make.  We should
catch that with a simple assertion.  This is essentially a crude
Zephyr equivalent of the extremely common "BUG: scheduling while
atomic" error in Linux drivers (just google it).

The one exception made is the circumstance where a thread has already
aborted itself.  At that stage, whatever upthread lock state might
have existed will have already been messed up, so there's no value in
our asserting here.  We can't catch all bugs, and this can actually
happen in error handling and/or test frameworks.

Fixes #33319

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-17 15:27:37 -04:00
Daniel Leung 216dc5ddfe kernel: mmu: remove un-needed call to virt_to_bitmap_offset
When marking the reserved region at the end of the virtual address
space, calling virt_to_bitmap_offset() is not needed as we already
know the offset. So remove it.

Coverity-CID: 235930
Fixes #35160

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-13 09:00:54 -05:00
Daniel Leung 660d1478c6 kernel: init.c: tag source for boot/pinned sections
This adds the tags for functions and variables so they
can be put into boot/pinned sections.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung 1310ad6b0e linker: add bits for pinned regions
This adds the necessary bits for linker scripts and source code
to specify which symbols need to be pinned in memory. This is
needed for demand paging as some functions and data must reside
in memory all the time and cannot be paged out (e.g. paging,
scheduler, and interrupt routines for functionality).

This is up to the arch/SoC/board to define the sections in
their linker scripts as the pinned section may need special
alignment which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung d812728ec4 linker: add bits for boot regions
This adds the necessary bits for linker scripts and source code
to specify which symbols are needed for boot process so they
can be grouped together.

One use of this is to group boot related code and data so these
won't intermingle with other kernel and application code, for better
caching.

This is a must for demand paging as some functions and data
must be available during the boot process and before the memory
manager is initialized. During this time, paging cannot be used
so symbols linked in virtual memory space are unavailable.

This is up to the arch/SoC/board to define the sections in
their linker scripts as section may need special alignment
which cannot be done in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Carlo Caione f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Carlo Caione e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This is not
always the case because many SoCs rely on external cache controllers as
a peripheral external to the core (for example PL310 cache controller
and the L2Cxxx family). In some cases you also want a single driver to
control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Anas Nashif 4d994af032 kernel: remove object tracing
Remove this intrusive tracing feature in favor of the new object tracing
using the main tracing feature in zephyr. See #33603 for the new tracing
coverage for all objects.

This will allow for support in more tools and less reliance on GDB for
tracing objects.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 7a646b3f8e Tracing: Work Queue tracing
Add Work tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell cae9a905d4 Tracing: Poll API and Work Poll tracing
Add Poll API and Work Poll tracing, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 3a66d6c695 Tracing: Timer tracing
Add Timer tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 65b376eb87 Tracing: Memory Slab tracing
Add memory slab tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 80cd9dac22 Tracing: Memory Heap tracing
Add Memory heap tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell fa9e64b304 Tracing: Pipe tracing
Add Pipe tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell d2e7de522d Tracing: Mailbox tracing
Add Mailbox tracing, default tracing hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 9ab447b3de Tracing: Message Queue tracing
Add Message Queue tracing, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 69e8869127 Tracing: Memory Stack tracing
Add memory stack tracing, default trace hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell f984823e0d Tracing: Queue tracing
Add Queue tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell b93ff29e4b Tracing: Conditional variable tracing
Add conditional variable tracing hooks, default tracing hooks,
and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell ed6148a841 Tracing: Mutex tracing hooks
Add mutex trace hooks, default mutex trace hooks, and trace hook
documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell 82addd6a64 Tracing: Semaphore tracing documentation
Add default semaphore trace hooks and documentation into
tracing.h.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-05-07 22:10:21 -04:00
Torbjörn Leksell fcf2fb6320 Tracing: Semaphore tracing
Add semaphore tracing using the new trace macros.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Flavio Ceolin 54324fd08e power: device_pm: Use spin lock instead of semaphore
Device PM runtime was using a semaphore to protect the critical section,
but the enable / disable functions were waiting on the semaphore. So,
just replace it with a spin lock.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-07 16:55:31 -04:00
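A generic sketch of the pattern this commit adopts: a k_spinlock guarding a short, non-blocking critical section in place of a semaphore. The guarded counter is illustrative, not the actual device PM state.
```
#include <zephyr/kernel.h>

static struct k_spinlock state_lock;
static int device_usage_count;

void usage_inc(void)
{
        /* Short, non-blocking critical section: a spin lock is
         * appropriate here, unlike a semaphore that callers wait on. */
        k_spinlock_key_t key = k_spin_lock(&state_lock);

        device_usage_count++;

        k_spin_unlock(&state_lock, key);
}
```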
Flavio Ceolin 8705c688e2 power: device_pm: Fix concurrence issues
The sync API was using k_poll_signal, and in certain conditions it is
possible for multiple threads to wait on a signal, leading to undefined
behavior.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-05-07 16:55:31 -04:00
Daniel Leung c31829074f kernel: mmu: use bitarrays for k_mem_map/k_mem_unmap
This uses bitarrays for allocating and deallocating virtual
addresses with k_mem_map() and k_mem_unmap(). This will
allow us to reuse virtual addresses.

Fixes #28900

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung c254c58184 kernel: mmu: add k_mem_unmap
This adds k_mem_unmap() so the memory mapped by k_mem_map()
can be freed.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
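A sketch of the map/unmap pairing this enables; the buffer size, flags, and the <zephyr/sys/mem_manage.h> include path are illustrative and may differ between Zephyr versions.
```
#include <zephyr/kernel.h>
#include <zephyr/sys/mem_manage.h>

void scratch_buffer_demo(void)
{
        size_t size = 4 * CONFIG_MMU_PAGE_SIZE;

        /* Map anonymous, zero-filled, read/write memory. */
        uint8_t *buf = k_mem_map(size, K_MEM_PERM_RW);

        if (buf == NULL) {
                return;
        }

        buf[0] = 0xAA; /* ... use the buffer ... */

        /* Release the pages and the virtual range so the
         * addresses can be reused by later mappings. */
        k_mem_unmap(buf, size);
}
```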
Daniel Leung 085d3768e1 kernel: mmu: introduce arch_page_phys_get()
This adds a new function prototype for arch_page_phys_get()
which will be used to translate mapped virtual addresses back
to physical memory addresses. This is needed for the future
k_mem_unmap() function which requires this to find
the corresponding page frame. It is faster to look through
the page tables instead of doing linear search of the page
frame array.

A weak function is provided in case arch_page_phys_get()
is not implemented at the arch level. This simply goes
through all the page frames and finds the one which is
mapped to the virtual address.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung fe48f5a920 kernel: mmu: always use before/after guard pages for k_mem_map()
When we start allowing unmapping of memory regions, there is no
exact way to know if k_mem_map() is called with the guard page option
specified or not. So just unconditionally enable guard pages on
both sides of the memory region to hopefully catch access
violations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung e6df25f68c kernel: mmu: implement z_phys_unmap()
This provides a counterpart to z_phys_map() which can be used
to temporarily map a memory region during the boot process, and
subsequently discard the mapping.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Guennadi Liakhovetski d7a3752915 work: remove a statement with no effect
work_timeout() is a function, a statement like "(void)work_timeout;"
has no effect.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-07 12:44:34 -04:00
Gerard Marull-Paretas d31a9be27c pm: device: rename device_pm struct to pm_device
Prefix all PM related functions/structures with pm.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Gerard Marull-Paretas 2c7b763e47 pm: replace DEVICE_PM_* states with PM_DEVICE_*
Prefix device PM states with PM.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Gerard Marull-Paretas 13f528bc59 kernel: replace power/power.h with pm/pm.h
Replace old header with the new one.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-05-05 18:35:49 -04:00
Jennifer Williams ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and corresponding #ifdef'd code
throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...) which
held the extern hooks for z_timestamp_main and z_timestamp_idle in the
removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Guennadi Liakhovetski ced7866901 smp: move a preprocessor conditional from .c to cmake
smp.c only has to be built if CONFIG_SMP is enabled. Remove
preprocessor checks from the file itself and update cmake rules
instead.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Guennadi Liakhovetski 8d07b7751a smp: add a Kconfig option to delay booting secondary CPUs
Usually Zephyr boots all secondary CPUs as a part of system
boot. Some applications, however, need the ability to boot on
the main CPU only and enable secondary CPUs selectively at
run-time. Add a Kconfig option to support this behaviour.
When booting CPUs on demand, applications also need helpers
to initialise a dummy thread and begin threaded execution
on those CPUs; add two such helpers.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
2021-05-03 17:13:01 -04:00
Anas Nashif 6df4405cca doc: fix typos
Fix various typos in the docs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-30 16:03:08 -04:00
Krzysztof Chruscinski 2165e8c585 Revert "kernel: Deprecate CONFIG_MULTITHREADING"
This reverts commit 28cb9dab64.
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski b85250108c kernel: Limit kernel files when CONFIG_MULTITHREADING=n
Avoid fetching files which use the scheduler. By explicitly avoiding
including RTOS-specific files, we ensure that they are not fetched
accidentally.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski 1ba23ca92b kernel: fatal: Avoid thread api access when no multithreading
Remove access to thread API when multithreading is off.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski c482a572d4 kernel: heap: Add support for CONFIG_MULTITHREADING=n
Ensure that k_heap does not attempt to block the thread when a
timeout is set and space cannot be allocated.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
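A sketch of a heap allocation that stays safe when CONFIG_MULTITHREADING=n: with the change above, a timeout cannot cause an attempt to block, so the call simply returns NULL on failure. The heap name and sizes are illustrative.
```
#include <zephyr/kernel.h>

K_HEAP_DEFINE(scratch_heap, 2048);

void *get_scratch(size_t bytes)
{
        /* With multithreading disabled, any timeout is effectively
         * K_NO_WAIT: the call returns NULL instead of blocking. */
        void *mem = k_heap_alloc(&scratch_heap, bytes, K_MSEC(10));

        return mem;
}
```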
Krzysztof Chruscinski 3b4b7c3a37 kernel: mem_slab: Add support to no multithreading
Mem_slab supports allocation with a timeout which blocks the context
if no slab is available. Updated to treat every timeout as K_NO_WAIT
when multithreading is disabled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski dd0715c770 kernel: timer: Adding support to CONFIG_MULTITHREADING=n
Updated timer to not touch thread/scheduler code when multithreading
is off.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski 7dcff6ecfe kernel: Move _kernel from sched to init
_kernel struct can be used when multithreading is disabled.
In that case sched.c may not be compiled.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Krzysztof Chruscinski b8fb353cd4 kernel: Move k_busy_wait from thread to timeout
k_busy_wait is the only function from thread.c that is used when
CONFIG_MULTITHREADING=n. Moving it to timeout.c since it fits better
there, as it requires the sys clock to be present.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2021-04-29 14:50:35 +02:00
Daniel Leung c8177ace3a kernel: work: handler null check is to NULL...
...instead of numeric zero.

Current usage is a violation of MISRA rule 11.9.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 07:16:37 -04:00
Daniel Leung 0773441422 kernel: device: return NULL for pointer type
Return NULL instead of returning numeric zero for a pointer type.

Current usage violates MISRA rule 11.9.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 07:16:37 -04:00
Daniel Leung abfe045fd3 kernel: userspace: rename obj_list in struct dyn_obj
This renames the obj_list element in struct dyn_obj to
dobj_list, to avoid identifier collision with the static
obj_list defined in userspace.c.

Violation of MISRA rule 5.9.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-29 07:16:11 -04:00
Jennifer Williams 9aa0f212ae kernel: work: fix missing final else
work_queue_main() was missing final else statement
in the if else if construct. This commit adds else {}
to comply with coding guideline 15.7. Includes a
context-specific description of why this branch is empty.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams dc11ffb562 kernel: timeout: fix missing final else
z_timeout_end_calc() was missing final else statement
in the if else if construct. This commit pulls the last
condition into a final else {} to comply with guideline
15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Jennifer Williams c00bdcf1a8 kernel: poll: fix missing final else
register_events() and signal_poll_event() missing final
else statement in the if else if construct. This commit adds
else {} to comply with coding guideline 15.7.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-27 17:31:59 -04:00
Gerard Marull-Paretas bfce935caf power: remove device_pm_control_nop function
Devices that do not require PM should just use NULL.
`device_pm_control_nop` is still kept as an alias to NULL until all
in-tree usage is replaced with NULL.

Code relying on device_pm_control function now returns -ENOTSUP
(equivalent to calling device_pm_control_nop).

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-27 16:28:49 -04:00
Daniel Leung 1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Due to uncertainty on the size of these blocks, more space is
being reserved, which could result in wasted space. However, this
retains the use of the hash table for faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Peter Bigot 707dc22fb0 kernel: fix error in synchronous work cancellation return value
The return value is documented to be true if the work was pending, but
the implementation returned true only if the work was actually running
(i.e. the caller had to wait).  It should also return true if
scheduled or submitted work was cancelled.

Note that this means the return value cannot be used to determine
whether the call slept.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-04-27 13:28:45 -04:00
Nicolas Pitre f97d12936e kernel: add an architecture specific structs header
Add the ability to define architecture specific structures, notably
the ability to extend struct _cpu with per-CPU arch-specific stuff that
can be accessed with _current_cpu->arch.* similarly to _current->arch.*
for per-thread architecture data.

This is opt-in for architectures that want to benefit from this,
otherwise empty defaults are provided. A placeholder for ARM64 is
included to show the pattern.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-21 09:03:47 -04:00
Peter Bigot f69a38162a kernel: atomic: consistently use named type for atomic pointer values
There's a typedef for non-pointer values compatible with atomic
non-pointer objects.  Add a similar typedef for pointer values, and
the corresponding macro for initializing atomic pointer types.

This also will simplify replacing the Zephyr atomic API with one
based on C11 atomics, should that be desirable.  C11 atomic pointer
values are not void*.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-04-19 15:22:13 +02:00
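A short sketch using the pointer-flavored atomic type and initializer described above; the handler-pointer use case is illustrative.
```
#include <zephyr/kernel.h>
#include <zephyr/sys/atomic.h>

/* Atomic pointer object, statically initialized to NULL. */
static atomic_ptr_t current_handler = ATOMIC_PTR_INIT(NULL);

void set_handler(void *handler)
{
        atomic_ptr_set(&current_handler, handler);
}

void *get_handler(void)
{
        return atomic_ptr_get(&current_handler);
}
```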
Nick Graves b445f13462 kernel: Allow k_poll on message queues
This commit adds the ability to use a message queue as a
k_poll object. It follows the same pattern as polling on
FIFOs.

This change has been proven in practice at Samsara.

Fixes: #26728

Signed-off-by: Nick Graves <nicholas.graves@samsara.com>
2021-04-17 07:47:26 -04:00
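A sketch of polling on a message queue using the new poll event type; the queue geometry and names are illustrative.
```
#include <zephyr/kernel.h>

K_MSGQ_DEFINE(my_msgq, sizeof(uint32_t), 8, 4);

void wait_for_message(void)
{
        struct k_poll_event events[] = {
                K_POLL_EVENT_STATIC_INITIALIZER(K_POLL_TYPE_MSGQ_DATA_AVAILABLE,
                                                K_POLL_MODE_NOTIFY_ONLY,
                                                &my_msgq, 0),
        };

        /* Block until data is available in the queue. */
        (void)k_poll(events, ARRAY_SIZE(events), K_FOREVER);

        uint32_t data;

        if (k_msgq_get(&my_msgq, &data, K_NO_WAIT) == 0) {
                /* ... handle the received message ... */
        }
}
```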
Nicolas Pitre 2bed37e534 mem_slab: move global lock to per slab lock
This avoids contention between unrelated slabs and allows for
userspace accessible slabs when located in memory partitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-14 14:20:19 -04:00
Flavio Ceolin f6f951cc17 kernel: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
Carlo Caione 64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because we can access
the struct by directly referencing &(_kernel.cpus[cpu_num]) in assembly.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Daniel Leung 09e8db3d68 kernel: enable using timing subsys to collect paging histograms
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung 1559712b22 timing: add kconfig CONFIG_TIMING_FUNCTIONS_NEED_AT_BOOT
This adds a new kconfig CONFIG_TIMING_FUNCTIONS_NEED_AT_BOOT so
that the timing subsystem can be initialized at boot, instead of
being #ifdef under thread runtime statistics. This will allow
other part of kernel and other subsystems to utilize the timing
functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung 8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung ae86519819 kernel: mmu: collect more demand paging statistics
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Anas Nashif 3f4f3f6c43 kernel: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif 0630452890 x86: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to values when computing and comparing against
unsigned variables and other related fixes of the same MISRA rule (10.4)

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Peter Bigot fed035231f kernel: work: fix schedule from running work
k_work_schedule() is supposed to be a no-op if the work item is
already scheduled or submitted: the previous schedule is left
unchanged.  The check incorrectly inhibited the schedule operation
when the work item was neither scheduled nor submitted, but was
running.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-29 12:27:36 -04:00
Anas Nashif d8f698703b kernel: idle/z_sched_prio_cmp: match implementation to prototype
The identifiers used in the declaration and definition of a function
shall be identical [MISRAC2012-RULE_8_3-b]

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-29 07:52:42 -04:00
Katsuhiro Suzuki 19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
Katsuhiro Suzuki 59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU of a thread. It is the
pair of the existing k_float_disable() API. It also adds an empty
arch_float_enable() to each architecture that has arch_float_disable().
The arc and riscv architectures already implement arch_float_enable(),
so I do not touch those implementations.

Motivation: the current Zephyr implementation does not allow using the
FPU on the main and other system threads such as the work queue. Users
need to create another thread with K_FP_REGS for floating point
programs. Users can use the FPU more easily if they can enable it on
running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
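A sketch of enabling FPU use on the current thread and handling -ENOTSUP on architectures without support; the options argument of 0 is illustrative.
```
#include <zephyr/kernel.h>
#include <errno.h>

void enable_fpu_for_self(void)
{
        int ret = k_float_enable(k_current_get(), 0);

        if (ret == -ENOTSUP) {
                /* The architecture does not support enabling the FPU
                 * dynamically; fall back to a K_FP_REGS thread. */
                return;
        }

        /* Floating point code can now run in this thread. */
}
```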
Anas Nashif b503be2d02 kernel: poll: rename reserved 'signal' symbol
This symbol is reserved and usage of reserved symbols violates the
coding guidelines. (MISRA 21.2)

NAME
       signal - ANSI C signal handling

SYNOPSIS
       #include <signal.h>

       sighandler_t signal(int signum, sighandler_t handler);

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-25 07:28:37 -04:00
Anas Nashif 669f7f74b8 kernel: rename reserved symbol 'remove'
This symbol is reserved and usage of reserved symbols violates the
coding guidelines. (MISRA 21.2)

NAME
	remove - remove a file or directory
SYNOPSIS
        #include <stdio.h>
        int remove(const char *pathname);

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-25 07:28:37 -04:00
Anas Nashif 068e0872d7 kernel: remove EXPERIMENTAL from some Kconfigs
Both thread monitor and thread names are not EXPERIMENTAL anymore. They
have been used across the tree and a lot of code depends on those features
already.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-23 13:01:08 +01:00
Kumar Gala e3285d5f24 kernel: Remove duplicate include of kswap.h
kswap.h was included twice.  Remove the duplication

Fixes #33524

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-22 13:04:05 -04:00
Shihao Shen 6525975a0e kernel: pipes: remove simple dead function k_pipe_block_put
Removed k_pipe_block_put and the static functions related only to it.
Now that all the old usage of k_mem_block has been replaced by k_heap,
k_pipe_block_put, which still takes a deprecated k_mem_block as an
argument, becomes dead code. All APIs that hook it from kernel.h have
been confirmed to be removed. Since an asynchronous message descriptor
is only allocated in k_pipe_block_put, the static functions for
pipe_async are removed as well.

Signed-off-by: Shihao Shen <shihao.shen@intel.com>
2021-03-22 07:20:06 -04:00
Anas Nashif c076d94eec kernel: remove tickless idle
This feature predated the tickless kernel and has been in legacy mode
for a while. We now have no drivers or systems that do not support
tickless, so remove this option and clean up the code to only use
tickless.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif a518f48796 clock: rename z_timeout_end_calc -> sys_clock_timeout_end_calc
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif fe0872c0ab clocks: rename z_tick_get -> sys_clock_tick_get
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif 5c90ceb105 clock: rename z_tick_get_32 -> sys_clock_tick_get_32
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif a387221f3c clock: rename z_clock_hw_cycles_per_sec_runtime_get
Do not use z_ for internal APIs, z_ is for private APIs within one
subsystem only.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Anas Nashif 9c1efe6b4b clock: remove z_ from semi-public APIs
The clock/timer APIs are not application facing APIs, however, similar
to arch_ and a few other APIs they are available to implement drivers
and add support for new hardware and are documented and available to be
used outside of the clock/kernel subsystems.

Remove the leading z_ and provide them as clock_* APIs for someone
writing a new timer driver to use.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-19 11:22:17 -04:00
Kumar Gala 7d35a8c93d kernel: remove arch_mem_domain_destroy
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed.  So remove
arch_mem_domain_destroy as well.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-18 16:30:47 +01:00
Kumar Gala 3a6598054a kernel: remove deprecated mem domain APIs
Remove k_mem_domain_destroy and k_mem_domain_remove_thread as they've
been deprecated for at least 2 releases now.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-17 13:49:36 -05:00
Andrzej Głąbek 6de16d0013 kernel: Add missing verification for device_usable_check() system call
so that this function and also device_is_ready() can be called from
user mode.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2021-03-15 10:45:20 -05:00
Enjia Mai 4aed856d7f kernel: smp: Remove unused internal API z_smp_reacquire_global_lock()
The internal function z_smp_reacquire_global_lock() is not used
anywhere inside the Zephyr code, so remove it.

Fixes #33273.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-03-14 18:32:26 -04:00
Peter Bigot b29abe3710 device: add API to visit required devices
The static device dependencies from devicetree are not the only ones
that might be present at runtime.  Add API that allows visiting
required devices without assuming that handles for or pointers to them
can be accessed as a static contiguous sequence.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-11 08:53:18 -05:00
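A hedged sketch of walking a device's required (dependency) devices with the new visitor API; it assumes the visitor callback takes the visited device plus a context pointer and that returning 0 continues the iteration.
```
#include <zephyr/device.h>
#include <zephyr/sys/printk.h>

static int print_dependency(const struct device *dep, void *context)
{
        printk("  requires: %s\n", dep->name);

        return 0; /* keep iterating over the remaining dependencies */
}

void dump_requirements(const struct device *dev)
{
        printk("%s dependencies:\n", dev->name);

        /* Visits each device this device depends on, without assuming
         * the handles form a contiguous static sequence. */
        (void)device_required_foreach(dev, print_dependency, NULL);
}
```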
Lauren Murphy d88ce65463 kernel/sched: only send IPI to abort thread if hardware supports it
Wrap arch_sched_ipi() call in z_thread_abort() with ifdef checking for
hardware support of IPI.

Fixes #32723

Signed-off-by: Lauren Murphy <lauren.murphy@intel.com>
2021-03-10 14:27:33 -05:00
James Harris 33c9be90cc kernel: fix TOCTTOU issue in k_thread_name_set
Previously, a racing write to the provided string could result
in up to CONFIG_THREAD_MAX_NAME_LEN-2 bytes after the end
of user-accessible memory being leaked into the thread name.

For now, make a temporary copy. In an ideal world this could
copy directly from userspace into the thread name, but that
violates the current vrfy / impl split.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-08 19:27:23 -05:00
Andy Ross 820c94e5dd arch/xtensa: Inline atomics
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions.  That's needlessly slow, given that the low
level primitives are a two-instruction sequence.  Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.

There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap.  That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).

Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.

This patch also contains a little bit of refactoring: the atomics
scheme has been split out into a separate header for each
implementation, and the ATOMIC_OPERATIONS_CUSTOM kconfig has been
renamed to ATOMIC_OPERATIONS_ARCH to better capture what it means.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Andy Ross deca2301f6 kernel/swap: Move arch_cohere_stacks() back under the lock
Commit 6b84ab3830 ("kernel/sched: Adjust locking in z_swap()") moved
the call to arch_cohere_stacks() out of the scheduler lock while doing
some reorganizing.  On further reflection, this is incorrect.  When
done outside the lock, the two arch_cohere_stacks() calls will race
against each other.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Eric Johnson b4aeef4d5b kernel: timer: Fix incorrect behavior for timers with K_FOREVER period
Zephyr docs state that timers will act as one-shot timers when started
with a period of K_NO_WAIT or K_FOREVER. However, the code adjusting
the period was setting K_FOREVER timeout ticks to 1, which caused the
timer to expire every tick. This adds a check to not adjust K_FOREVER
periods.

Signed-off-by: Eric Johnson <eric@liveathos.com>
2021-03-07 08:00:08 -05:00
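For illustration, a one-shot timer relying on the documented K_FOREVER behaviour fixed above; names and durations are placeholders:

```
#include <zephyr/kernel.h>

static void expiry_fn(struct k_timer *timer)
{
	/* With a K_FOREVER period this runs exactly once. Before the
	 * fix the period was clamped to 1 tick, so it fired every tick.
	 */
}

K_TIMER_DEFINE(one_shot, expiry_fn, NULL);

void start_one_shot(void)
{
	k_timer_start(&one_shot, K_MSEC(100), K_FOREVER);
}
```
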
Flavio Ceolin 9b246aba78 power: Make pm_system_resume private
This API is not intended to be public and it is called only from the
idle thread.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Flavio Ceolin 2e9b583da9 idle: Remove weak function
pm_system_resume is always implemented when PM is enabled. There is no
need to have this weak function under an ifdef PM.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Flavio Ceolin 6307d19967 power: Remove unused / unimplemented code
pm_system_resume_from_deep_sleep is not implemented or used
anywhere. Just remove it and keep the code base cleaner.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Flavio Ceolin e2771340af power: Remove unnecessary pm_idle_exit_notification_disable api
This function is useless and the state variable that it was
controlling is also not necessary because the same logic is being
handled by the variable post_ops_done.

This reasonably simplifies idle thread logic.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Flavio Ceolin b5e1336e83 power: s/POWER_STATE_ACTIVE/PM_STATE_ACTIVE
Fix some references to old power state names.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Flavio Ceolin 10f29359d7 power: Make pm_system_suspend private
pm_system_suspend is called only from the idle thread and should
not be exported as a public API.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
James Harris 53b8179371 kernel: sem: handle resets with outstanding waiting threads
Previously, a k_sem_reset with any outstanding waiting threads would
leave the semaphore in an inconsistent state, with more threads
waiting in the wait_q than the count would indicate.

Explicitly return -EAGAIN to any waiting threads upon k_sem_reset, to
ensure safety here.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-06 07:39:43 -05:00
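A sketch of a waiter that tolerates the new k_sem_reset() semantics; the semaphore and function names are illustrative:

```
#include <zephyr/kernel.h>
#include <errno.h>

K_SEM_DEFINE(my_sem, 0, 1);

void waiter(void)
{
	int ret = k_sem_take(&my_sem, K_FOREVER);

	if (ret == -EAGAIN) {
		/* k_sem_reset() ran while we were pended on the wait_q */
	} else if (ret == 0) {
		/* normal give/take pairing */
	}
}
```
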
James Harris b10428163a kernel: sem: add K_SEM_MAX_LIMIT
Currently there is no way to distinguish between a caller
explicitly asking for a semaphore with a limit that
happens to be `UINT_MAX` and a semaphore that just
has a limit "as large as possible".

Add `K_SEM_MAX_LIMIT`, currently defined to `UINT_MAX`, and akin
to `K_FOREVER` versus just passing some very large wait time.

In addition, the `k_sem_*` APIs were type-confused, where
the internal data structure was `uint32_t`, but the APIs took
and returned `unsigned int`. This changes the underlying data
structure to also use `unsigned int`, as changing the APIs
would be a (potentially) breaking change.

These changes are backwards-compatible, but it is strongly suggested
to take a quick scan for `k_sem_init` and `K_SEM_DEFINE` calls with
`UINT_MAX` (or `UINT32_MAX`) and replace them with `K_SEM_MAX_LIMIT`
where appropriate.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-05 08:13:53 -06:00
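A small sketch of the suggested replacement for UINT_MAX limits; names are illustrative:

```
#include <zephyr/kernel.h>

/* "As large as possible" limit, instead of spelling out UINT_MAX */
K_SEM_DEFINE(events_sem, 0, K_SEM_MAX_LIMIT);

void init_runtime_sem(struct k_sem *sem)
{
	k_sem_init(sem, 0, K_SEM_MAX_LIMIT);
}
```
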
Spoorthy Priya Yerabolu 4118ed1d4d kernel: sched: removing dead code
Due to the recent changes to the scheduler,
z_find_first_thread_to_unpend and z_remove_thread_from_ready_q are not
used anymore, so remove the dead code.

fixes: #32691

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2021-03-05 11:05:25 +03:00
Andy Ross 6400bb54d6 kernel/idle: Clean up and refactoring / remove TICKLESS_IDLE_THRESH
While I'm in the idle code, let's clean this loop up.  It was a really
bad #ifdef hell:

* Remove the CONFIG_TICKLESS_IDLE_THRESH logic (and the kconfig),
  which never did anything but needlessly increase latency.

* Move the needed timeout logic from the main loop into
  pm_save_idle(), which eliminates the special case for
  !SYS_CLOCK_EXISTS.

Behavior (modulo that one kconfig) should be completely unchanged, and
now the inner part of the idle loop looks like:

    while (true) {
        (void) arch_irq_lock();

        if (IS_ENABLED(CONFIG_PM)) {
            pm_save_idle();
        } else {
            k_cpu_idle();
        }
    }

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-04 14:31:12 -05:00
Andy Ross 39a8f3b4f9 kernel/idle: Replace stolen IRQ lock
The removal of the abort handling also absconded with an IRQ lock that
is required for reliable operation in the idle loop.  Put it back.

Once the idle loop has made a decision to enter idle, any interrupt
that arrives needs to be masked and delivered AFTER the system enters
idle.  Otherwise we run the risk of races where the system accepts and
processes an interrupt that should have prevented idle, but then goes
to sleep anyway having already made the decision.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-04 14:31:12 -05:00
Peter Bigot b706a5e999 kernel: remove old work queue implementation
Now that the old API has been reimplemented with the new API remove
the old implementation and its tests.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
Peter Bigot d1affd9118 kernel: default to new work API implementation
Switch the default and clean up some test workarounds.  This will enable
final conversions necessary to transition to the new API.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
Peter Bigot dc34e7c6f6 kernel: add new work queue implementation
This commit provides a complete reimplementation of the work queue
infrastructure intended to eliminate the race conditions and feature
gaps in the existing implementation.

Both bare and delayable work structures are supported.  Items can be
submitted; delayable items can be scheduled for submission at a future
time.  Items can be delayed, queued, and running all at the same time.
A running item can also be canceling.

The new implementation:
* replaces "pending" with "busy" which identifies the active states;
* supports canceling delayed and submitted items;
* prevents resubmission of an item being canceled until cancellation
  completes;
* supports waiting for cancellation to complete;
* supports flushing a work item (waiting for the last submission to
  complete without preventing resubmission);
* supports waiting for a queue to drain (only allows resubmission from
  the work thread);
* supports stopping a work queue in conjunction with draining it;
* prevents handler-reentrancy during resubmission.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
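A brief usage sketch of the reworked delayable-work API described above, assuming the k_work_delayable and k_work_sync types it introduces; names are illustrative:

```
#include <zephyr/kernel.h>

static void work_handler(struct k_work *work)
{
	/* deferred processing runs on the system work queue thread */
}

static K_WORK_DELAYABLE_DEFINE(dwork, work_handler);

void schedule_then_cancel(void)
{
	struct k_work_sync sync;

	k_work_schedule(&dwork, K_MSEC(100));		/* delayed submission */
	k_work_cancel_delayable_sync(&dwork, &sync);	/* wait for cancel to complete */
}
```
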
Peter Bigot 44539ed645 kernel: select work queue implementation
Attempts to reimplement the existing work API using a new work
implementation failed, primarily due to heavy use of whitebox testing
in validating the original API.  Add a temporary Kconfig that will
select between the two implementations so we can use the same
identifiers but select which implementation they reference.

This commit just adds the selection infrastructure and uses it to
conditionalize the existing implementation in anticipation of the new
one in the next commit.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
Peter Bigot 0259c864df kernel: add private scheduler APIs
These functions are a subset of proposed public APIs to clean up
several issues related to safely handling waking of threads.  They
have been made private as their interface may change, but their use
will simplify the reimplementation of the k_work functionality.

See: https://github.com/zephyrproject-rtos/zephyr/pull/29668

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
James Harris c7bb423f3e kernel: fix race conditions with z_ready_thread
Several internal APIs wrote thread attributes (return value, mainly)
_after_ calling `z_ready_thread`. This is unsafe, at least in SMP,
because another core could have already picked up and run the thread.

Fixes #32800.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-03 13:54:47 -05:00
James Harris 6543e06914 kernel: sched: avoid unnecessary lock in z_impl_k_yield
`z_impl_k_yield` unlocked sched_spinlock, only to lock it again
immediately, do a little bit more work, then unlock it again.
This causes performance issues on SMP, where `sched_spinlock`
is often fairly highly contended and cores often end up spinning
for quite a while waiting to retake the lock in `z_swap_unlocked`.

Instead directly pass the spinlock key to `z_swap` and avoid the
extra lock+unlock.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:35:21 -05:00
James Harris 2cd0f66515 kernel: sched: change to 3-way thread priority comparison
`z_is_t1_higher_prio_than_t2` was being called twice in both the
context-switch fastpath and in `z_priq_rb_lessthan`, just to
deal with priority ties. In addition, the API was error-prone
(and too much in the fastpath to be able to assert its invariants)
- see also #32710 for a previous example of this API breaking
and returning a>b but also b>a.

Replacing this with a direct 3-way comparison `z_cmp_t1_prio_with_t2`
sidesteps most of these issues. There is still a concern that
`sgn(z_cmp_t1_prio_with_t2(a,b)) != -sgn(z_cmp_t1_prio_with_t2(b,a))`
but I don't see any way to alleviate this aside from adding an
assert to the fastpath.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:27:14 -05:00
James Harris 3330ab12d8 kernel: fix yielding between tasks with same deadline
Previously two tasks with the same deadline and priority would
always have `z_is_t1_higher_prio_than_t2` `true` in both directions.

This is logically inconsistent, and results in `k_yield` not actually
yielding between identical threads.

Signed-off-by: James Harris <james.harris@intel.com>
2021-02-27 10:25:47 +01:00
Andy Ross 6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
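A minimal join example against the new implementation; the stack size, priority, and names are arbitrary choices for illustration:

```
#include <zephyr/kernel.h>

K_THREAD_STACK_DEFINE(worker_stack, 1024);
static struct k_thread worker;

static void worker_fn(void *p1, void *p2, void *p3)
{
	/* ... do some work, then return ... */
}

void run_and_join(void)
{
	k_tid_t tid = k_thread_create(&worker, worker_stack,
				      K_THREAD_STACK_SIZEOF(worker_stack),
				      worker_fn, NULL, NULL, NULL,
				      K_PRIO_PREEMPT(1), 0, K_NO_WAIT);

	/* Blocks until worker_fn returns or the thread is aborted */
	k_thread_join(tid, K_FOREVER);
}
```
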
Andy Ross c0c8cb0e97 kernel: Remove abort and join implementation (UNBISECTABLE)
THIS COMMIT DELIBERATELY BREAKS BISECTABILITY FOR EASE OF REVIEW.
SKIP IF YOU LAND HERE.

Remove the existing implementation of k_thread_abort(),
k_thread_join(), and the attendant facilities in the thread subsystem
and idle thread that support them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross bf99f3105f kernel/timeout: Correctly clamp z_clock_set_timeout() argument
This function would correctly suppress attempts to set timeouts that
were too soon for the driver or farther out than what was already set,
but when it actually set the timeout it would use the requested value
and not clamp it to the minimum of it and the current timeout
expiration, leading to "too-long" timeouts being set at the driver.

In uniprocessor configurations, that turns out to have been benign
because something else would always come back along when timeout state
changed and fix the broken value before the expiration.

But in SMP, this opens up races.  For example, the idle thread on one
CPU can see that there are no active threads and schedule a maximum
value timeout at the same time as the other thread adds a new timeout
that expects a near-term expiration.  The broken code here would see
that the new timeout exists, decide that yes it needs to override, but
then set the K_TICKS_FOREVER value it got from the idle thread!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 419f37043b kernel/sched: Clamp minimum timeslice when TICKLESS
When the kernel is TICKLESS, timeouts are set as needed, and drivers
all have some minimum amount of time before which they can reliably
schedule an interrupt.  When this happens, drivers will kick the
requested interrupt out by one tick.  This means that it's not
reliably possible to get a timeout set for "one tick in the
future"[1].

And attempting to do that is dangerous anyway.  If the driver will
delay a one-tick interrupt, then code that repeatedly tries to
schedule an imminent interrupt may end up in a state where it is
constantly pushing the interrupt out into the future, and timer
interrupts stop arriving!  The timeout layer actually has protection
against this case.

Finally getting to the point: in recent changes, the timeslice layer
lost its integration with the "imminent" test in the timeout code, so
it's now able to run into this situation: very rapidly context
switching code (or rapidly arriving interrupts) will have the effect
of infinitely[2] delaying timeouts and stalling the whole timeout
subsystem.

Don't try to be fancy.  Just clamp timeslice duration such that a
slice is 2 ticks at minimum and we'll never hit the problem.  Adjust
the two tests that were explicitly requesting very short slice rates.

[1] Of course, the tradeoff is that the tick rate can be 100x higher
or more, so on balance tickless is a huge win.

[2] Actually it only lasts until a 31 bit signed rollover in the HPET
cycle count in practice.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross a202670c18 kernel/sched: Remove now-spurious SWAP_NONATOMIC workaround
Recent work to normalize use of the thread QUEUED state bit means that
we never attempt to remove unqueued threads from the low-level run
queue.  So the old workaround for SWAP_NONATOMIC that was trying to
detect this condition isn't necessary anymore.

Which is serendipitous, because it was written to encode some very
specific logic about the circumstances where _current could be
dequeued that I'd like to be able to break.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 05c468f594 kernel/sched: Make z_ready_thread() safe vs. already-running threads
This is part of the scheduler API, and was always just a synchronized
wrapper around the internal ready_thread() function.  But where the
internal users seem to be careful not to call it on threads that are
not known to be already queued or running, the general users in the
IPC code seem to be less strict.

Add a simple test to detect the case where a thread is already
running.  Right now this just loops over the array of CPUs, so is O(N)
in the CPU count even though N is never more than four for us
currently.  But this is possible without modifying data structures.  A
more scalable way to do this if we ever need to run on very parallel
systems would be to use another state bit for RUNNING, or to keep a
backpointer in the thread struct to the CPU it's running on, etc...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 6b84ab3830 kernel/sched: Adjust locking in z_swap()
Swap was originally written to use the scheduler lock just to select a
new thread, but it would be nice to be able to rely on scheduler
atomicity later in the process (in particular it would be nice if the
assignment to cpu.current could be seen atomically).  Rework the code
a bit so that swap takes the lock itself and holds it until just
before the call to arch_switch().

Note that the local interrupt mask has always been required to be held
across the swap, so extending the lock here has no effect on latency
at all on uniprocessor setups, and even on SMP only affects average
latency and not worst case.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 37866336f9 kernel/sched: Fix race between thread wakeup timeout and abort
Aborted threads will cancel their timeouts, but the timeout subsystem
isn't protected under the same lock so it's possible for a timeout to
fire just as a thread is being aborted and wake it up unexpectedly.
Check the state before blowing anything up.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 8e16012ab7 kernel/thread: Initialize pended_on field of struct thread_base
This got missed, leaving garbage there for restarted threads to trip
on.  Actually I see multiple uninitialized fields, which seems odd.
This code deserves some rework, thread initialization isn't a
performance path and we should probably be zeroing the struct out.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andrei Emeltchenko 377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving macro LOCKED() to the correct
kernel_internal.h header.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
Daniel Leung ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves similar function as
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung ec21c0b92f kernel: mmu: fix boot address translation macros
The Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() address
translation macros are flipped in their calculations.
The calculation is supposed to be:

  virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
                 SRAM_BASE_ADDRESS)

So fix them.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Anas Nashif ecdc770c9b kernel: thread: do not assert on tests
Remove the assert that prevented us from testing non-userspace code on
platforms that can do userspace.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-22 14:36:06 -05:00
Andy Ross 252764a4ba kernel/timeout: Fix off-by-one in absolute timeouts
The computation was using the already-adjusted input value that
assumed relative timeouts and not the actual argument the user passed.
Absolute timeouts were consistently waking up one tick early.

Fixes #32499

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-22 08:45:24 -05:00
Peter Bigot d554d34137 device: add post-process of elf file to manage device handles
Following the idiom used for system calls, add script support to read
the initial application binary to identify which devices are defined,
and to use their offset in the device array as their unique handle
rather than the externally-defined ordinal from devicetree.  The
device dependency arrays are updated to use these handles.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 15:46:16 -05:00
Peter Bigot d1a0568e11 device: store device pm busy status in the state structure
Move the busy status from a global atomic bit sequence to atomic flags
in the device PM state.  While this temporarily adds 4 bytes to each
PM structure the whole device PM infrastructure will be refactored and
it's likely the extra memory can be recovered.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot 65eee5cb47 device: store initialization status in the state structure
Separate the state indicator of whether the initialization function
has been invoked from the success or failure of the initialization.
This allows precise confirmation that the device is ready (i.e. it has
been initialized, and that initialization succeeded).

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot 8d771f1d8e device: move device power management state into common dynamic state
This avoids the need for a distinct object that uses flash to store its
initializer.  Instead the state is initialized when the kernel is
starting up, before anything can reference it.  In future refactoring
the PM state could be accessed directly without storing an extra
pointer in the static device state.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot 1cadd8b305 device: perform dynamic device initialization during system startup
Initialize all device objects in a batch before invoking any code that
might try to reference data in them.  This eliminates a race condition
enabled by the ability to resolve a device structure at build time,
and reference it from one device's initialization routine before the
device itself has been initialized.

While the device is pulled from the sys_init records rather than
static devices, all in-tree init_entry records that are associated
with devices are produced via Z_DEVICE_DEFINE(), so there should be no
static devices that would be missed by instead iterating over the
device records.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot 5b36a01a67 device: binding lookup should return null for unsupported names
A null device name should map to a null device.  So should a name that
is empty.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-16 14:39:53 -06:00
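Illustrative expectations after this change; both lookups yield NULL:

```
#include <zephyr/device.h>

void lookup_examples(void)
{
	const struct device *a = device_get_binding(NULL);	/* NULL */
	const struct device *b = device_get_binding("");	/* NULL */

	(void)a;
	(void)b;
}
```
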
Andy Ross c7d0cb6641 include/kernel_arch_interface.h: Redocument arch_switch()
Some recent changes exposed some common "arch_switch() anti-patterns"
in various architectures.  The documentation technically described
this all correctly, but probably wasn't as clear as it should have
been.  Rewrite, making clear exactly what needs to happen and how the
fields should be interpreted.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 4ff457113e kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.

Example:
   * CPU0 is idle, CPU1 is running thread A
   * CPU1 makes high priority thread B runnable
   * CPU1 reaches a schedule point (or returns from an interrupt) and
     decides to run thread B instead
   * CPU0 simultaneously takes its IPI and returns, selecting thread A

Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable.  So we have a deadlock, each CPU is spinning waiting for the
other!

Actually, in practice this seems not to happen on existing hardware
platforms, it's only exercisable in emulation.  The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears.  I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly.  In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.

The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), when we are guaranteed to reach the end in
bounded time.

Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 91946ef21c kernel/sched: Refactor, unify management of QUEUED state
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync.  Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity.  The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments.  No behavior
change.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross dd43221540 kernel/sched: Fix race with switch handle
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL.  So it will get the signal to move on when we revert the
value, when clearly we are still running on the stack!

In practice this was found on x86 which poisons the switch context
such that it crashes instantly.

Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.

Fixes #28105

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 1ba7414029 kernel/sched: Correct coherence assert
Some legacy spots in our IPC layer (legally) pass a NULL wait queue to
pend().  Allow this in the coherence assertion.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross 4dc6a0b89b kernel/poll: Remove dummy waitq from stack
The poll code uses a dummy wait queue so the threads have something to
block on, but the previous coherence pass (which rearranged things to
put the _poller data elsewhere) missed that this was on the stack,
which is not allowed.  It actually has no use except as a list, so
make it a global static instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross 1d51e888d8 kernel/z_swap: Remove on-stack dummy spinlock
The z_swap_unlocked() function used a dummy spinlock for simplicity.
But this runs afoul of checking for stack-resident spinlocks
(forbidden when KERNEL_COHERENCE is set).  And it's executing needless
code to release the lock anyway.  Replace with a compile time NULL,
which will improve performance, correctness and code size.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross 604f0f44b6 kernel/sched: Add missing lock around waitq unpend calls
The two calls to unpend a thread from a wait queue were inexplicably*
unsynchronized, as James Harris discovered.  Rework them to call the
lowest level primitives so we can wrap the process inside the scheduler
lock.

Fixes #32136

* I took a brief look.  What seems to have happened here is that these
  were originally synchronized via an implicit lock taken by an outer caller
  (remember the original Uniprocessor irq_lock() API is a recursive
  lock), and they were mostly implemented in terms of middle-level
  calls that were themselves locked.  So those got ported over to the
  newer spinlock but the outer wrapper layer got forgotten.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-10 07:43:18 -05:00
Daniel Leung 371752bce3 kernel: tls: align tdata/tbss sections in stack
This lets the linker tell us what kind of alignment is required
for both tdata and tbss data when copying them into the stack.
If they are not aligned as expected by the toolchain, generated
code would access incorrect locations for thread variables.

Fixes #32015

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-07 23:28:43 -05:00
Nicolas Pitre f9461d1ac4 mmu: fix ARM64 compilation by removing z_mapped_size usage
The linker script defines `z_mapped_size` as follows:

```
	z_mapped_size = z_mapped_end - z_mapped_start;
```

This is done with the belief that precomputed values at link time will
make the code smaller and faster.

On Aarch64, symbol values are relocated and loaded relative to the PC
as those are normally meant to be memory addresses.

Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then
`z_mapped_size` might still have a reasonable value, say 0x59334.
But, when interpreted as an address, that's very very far from the PC
whose value is in the neighborhood of 0x2000000000. That overflows the
4GB relocation range:

```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c):
relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```

The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of
`z_mapped_end - z_mapped_start` at the source code level. Given this
is used within loops that already start with `z_mapped_start` anyway,
the compiler is smart enough to combine the two occurrences and
dispense with a size counter, making the code effectively
slightly better for all while avoiding the Aarch64 relocation
overflow:

```
   text    data     bss     dec     hex filename
   1216       8  294936  296160   484e0 mmu.c.obj.arm64.before
   1212       8  294936  296156   484dc mmu.c.obj.arm64.after
   1110       8    9244   10362    287a mmu.c.obj.x86-64.before
   1106       8    9244   10358    2876 mmu.c.obj.x86-64.after
```

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-05 17:19:56 -05:00
Carlo Caione 302a36a115 kernel: mmu: Fix trivial typos
Otherwise the memory scheme is confusing to read.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-04 14:00:36 -05:00
Martin Åberg 612dad264c kernel: Decouple TICKS_PER_SEC from TICKLESS_CAPABLE
The SYS_CLOCK_TICKS_PER_SEC default may depend on the kernel config
for tickless, rather than the capability.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-02-04 12:34:23 -05:00
Ioannis Glaropoulos 40aab3276c Revert "kernel: init: activate FPU for main thread"
Activating K_FP_REGS flags introduces stack memory
overhead for the main thread in Cortex-M architecture.
Several ARM platforms experience main thread stack
overflows when building with FPU_SHARING=y.
Enabling FPU sharing in main thread should not be
the default configuration. Users are welcome to
enable FP sharing on the main thread in the
application code, in main().

This reverts commit 8453a73ede.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-03 17:22:50 -05:00
Anas Nashif 39f632e7f0 kernel: fix usage of KERNEL_COHERENCE macro
Add missing CONFIG_ to KERNEL_COHERENCE usage in code.

Fixes #30380

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-03 10:42:04 -05:00
Daniel Leung d1495e98e2 kernel: fix arch_mem_coherent() call in spinlock
The call to arch_mem_coherent() inside spinlock.h,
when spinlock validation and memory coherence are enabled,
causes a build error as spinlock.h does not include
kernel_arch_func.h directly. However, simply including
that file does not work either, as this creates
a chicken-and-egg problem in the chain of include files.
In order to make spin validation work with kernel
coherence enabled, a separate function is created
to break the circular dependency between include files.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-03 10:42:04 -05:00
Daniel Leung 079bc64c16 kernel: fix _kernel argument to arch_mem_coherent
Argument to arch_mem_coherent() is a pointer so pass
a pointer to _kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-03 10:42:04 -05:00
Andy Ross 887e1abace kernel/timeout: Fix timeout "sooner" computation
There was an edge case in the timeout handling (exposed by, but not
strictly related to, the recent timeslice fix): the next_timeout()
computation would include time slice expiration as a clamp on the
result, but this would be invoked also on the z_set_timeout_expiry()
path which gets hooked on entry to a new thread which is needed to set
the timeout in the first place.  So if no other timer interrupt was
scheduled, it was possible to miss the first timeslice interrupt after
thread scheduling.

The explanation is much longer than the fix (use <= as the comparator
instead of <).

In practice this was only being hit in the existing test suite on
riscv miv running under renode using non-default clock rates.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-02 17:58:40 -05:00
Andy Ross 544475d8a7 kernel/timeout: Schedule zero-time timeouts
Fix an edge case that snuck in with the recent fix: if timeslicing is
enabled, the CPU's slice_ticks will be zero, and thus match a timeout
object's dticks value of zero, and thus get suppressed (because "we
already have a timeout scheduled for that") incorrectly.

Fixes #31789

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-02 17:58:40 -05:00
Alexandre Bourdiol 8925af94f2 kernel: Kconfig: increase test default MAIN_STACK_SIZE for ARM Cortex M
There are more and more tests that fail due to stack overflow.
Increase MAIN_STACK_SIZE to fix those issues.

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2021-02-02 10:05:46 -05:00
Flavio Ceolin 148769c715 sched: timeout: Do not miss slice timeouts
Time slices don't have a timeout struct associated with them and stored
in timeout_list. The time slice timeout is programmed directly into the
system clock and tracked in _current_cpu->slice_ticks.

There is one issue where the time slice timeout can be missed because
the system clock is re-programmed to a longer timeout. For this to
happen, it is only necessary that the timeout_list is empty (no timeout
set) and a new timeout longer than the remaining time slice is set. This
is caused by z_add_timeout not checking for the slice ticks.

The following example spots the issue:

K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);
K_THREAD_STACK_ARRAY_DEFINE(tstacks, NUM_THREAD, STACK_SIZE);
K_SEM_DEFINE(sema, 0, NUM_THREAD);

static inline void spin_for_ms(int ms)
{
	uint32_t t32 = k_uptime_get_32();

	while (k_uptime_get_32() - t32 < ms) {
	}
}

static void thread_time_slice(void *p1, void *p2, void *p3)
{
	printk("thread[%d] - Before spin\n", (int)(uintptr_t)p1);

	/* Spinning for longer than slice */
	spin_for_ms(SLICE_SIZE + 20);

	/* The following print should not happen before another
	 * same priority thread starts.
	 */
	printk("thread[%d] - After spinning\n", (int)(uintptr_t)p1);
	k_sem_give(&sema);
}

void main(void)
{
	k_tid_t tid[NUM_THREAD];
	struct k_thread t[NUM_THREAD];
	uint32_t slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE);
	int old_prio = k_thread_priority_get(k_current_get());

	/* disable timeslice */
	k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));

	for (int j = 0; j < 2; j++) {
		k_sem_reset(&sema);

		/* update priority for current thread */
		k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(j));

		/* synchronize to tick boundary */
		k_usleep(1);

		/* create delayed threads with equal preemptive priority */
		for (int i = 0; i < NUM_THREAD; i++) {
			tid[i] = k_thread_create(&t[i], tstacks[i], STACK_SIZE,
						 thread_time_slice, (void *)i, NULL,
						 NULL, K_PRIO_PREEMPT(j), 0,
						 K_NO_WAIT);
		}

		/* enable time slice (and reset the counter!) */
		k_sched_time_slice_set(SLICE_SIZE, K_PRIO_PREEMPT(0));

		/* Spins for while to spend this thread time but not longer */
		/* than a slice. This is important  */
		spin_for_ms(100);

		printk("before sleep\n");
		/* relinquish CPU and wait for each thread to complete */
		k_sleep(K_TICKS(slice_ticks * (NUM_THREAD + 1)));

		for (int i = 0; i < NUM_THREAD; i++) {
			k_sem_take(&sema, K_FOREVER);
		}

		/* test case teardown */
		for (int i = 0; i < NUM_THREAD; i++) {
			k_thread_abort(tid[i]);
		}
		/* disable time slice */
		k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));
	}
	k_thread_priority_set(k_current_get(), old_prio);
}

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-27 16:55:58 -05:00
Andrew Boie 14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 6c97ab3167 mmu: promote public APIs
These are application facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie c7be5dddda mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 60d306642e kernel: add z_num_pagefaults_get()
Simple counter of number of successfully handled page faults by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 611b626b39 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie a5cb878144 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 431b7c0fe5 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie a6eca9fab6 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See
doxygen for these API definitions for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie ecb25fec51 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 5db615bb38 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 8ccec8eba6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
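A hedged usage sketch of the new interface; the include paths follow the newer zephyr/ prefix, and K_MEM_PERM_RW and CONFIG_MMU_PAGE_SIZE are the flag and page-size symbols assumed here:

```
#include <zephyr/kernel.h>
#include <zephyr/sys/mem_manage.h>

void grow_anonymous_region(void)
{
	/* Ask for 16 pages of read/write anonymous memory, loosely
	 * analogous to an anonymous mmap().
	 */
	void *region = k_mem_map(16 * CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);

	if (region == NULL) {
		/* out of virtual address space or page frames */
	}
}
```
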
Andrew Boie e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Peter Bigot affa7a1c7e Revert "device: add post-process of elf file to manage device handles"
This reverts commit 40d3653758.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-23 18:01:03 -05:00
Nicolas Pitre a2011d8af9 z_heap_aligned_alloc(): avoid memory wastage
The strategy used in z_heap_aligned_alloc() was to allocate an extra
align-sized memory block for storing a pointer to the memory heap.
This is wasteful in terms of memory usage when alignment is larger
than a pointer width. A loop is needed to find the initial memory
start when freeing it, which isn't optimal either.

Instead, let's have sys_heap_aligned_alloc() rewind a pointer after
it is aligned to make just enough room for storing our heap reference.
This way the heap reference is always located immediately before the
aligned memory and any unused memory is returned to the heap.

The rewind and alignment values may coincide in which case only
the alignment is necessary anyway.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-22 10:04:43 -05:00
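For context, a small sketch of the sys_heap aligned-allocation API that the k_malloc wrapper above builds on; buffer size and alignment are arbitrary:

```
#include <zephyr/sys/sys_heap.h>

static char heap_mem[2048];
static struct sys_heap heap;

void heap_demo(void)
{
	sys_heap_init(&heap, heap_mem, sizeof(heap_mem));

	/* 64-byte aligned block from the user-provided heap */
	void *p = sys_heap_aligned_alloc(&heap, 64, 200);

	sys_heap_free(&heap, p);
}
```
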
Flavio Ceolin d21cfd5f36 power: Remove power management conditionals from code
Remove conditionals (PM_DEEP_SLEEP_STATES and PM_SLEEP_STATES) from
power management code. Now these features are always available when
power management is enabled.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-22 09:31:20 -05:00
Flavio Ceolin 579f7049c7 power: Move pm subsystem to new power states
Migrate the whole pm subsystem to use new power states information
from power_state.h and get states and residency properties from
device tree.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-22 09:31:20 -05:00
Peter Bigot 0ab314f705 kernel: const-qualify objects used to calculate delay values
The internal API to measure time until a delay expires does not modify
the referenced timeout.  Make the functions that call it take pointers
to const objects, so that they can be used with pointer to
const-qualified containers.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-22 08:05:26 -06:00
Anas Nashif db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 8e84eaf73e Revert "kernel: add page frame management"
This reverts commit 2ca5fb7e06.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 0417b97257 Revert "kernel: add k_mem_map() interface"
This reverts commit 69d39af5e6.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 6b82664a5a Revert "mmu: add k_mem_free_get()"
This reverts commit 9111ec2c19.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif d887e078f9 Revert "mmu: ensure gperf data is mapped"
This reverts commit e9bfd64110.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 65122b776a Revert "kernel: add demand paging arch interfaces"
This reverts commit b8ae437967.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif cd0beca292 Revert "kernel: add demand paging internal interfaces"
This reverts commit 3e51a7a775.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 752934b3c8 Revert "kernel: add demand paging implementation"
This reverts commit 2fe1fc53c8.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 75ebe4c7bd Revert "mmu: pin the whole kernel"
This reverts commit a45486e1d5.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif c2c87c99c7 Revert "kernel: add z_num_pagefaults_get()"
This reverts commit d7e6bc3e84.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 5e978d237c Revert "mmu: backing stores reserve page fault room"
This reverts commit 7a642f81ab.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif ef17f889dc Revert "mmu: promote public APIs"
This reverts commit 63fc93e21f.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Daniel Leung d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix those (functions, enums, etc.) that
are being used outside the coredump subsys. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Flavio Ceolin d6d4a832a4 kernel: build: Make TICKLESS_KERNEL depends on TICKLESS_CAPABLE
The tickless kernel option depends on tickless-capable being selected
by the target.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-21 17:20:32 -05:00
Andrew Boie 63fc93e21f mmu: promote public APIs
These are application facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 7a642f81ab mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie d7e6bc3e84 kernel: add z_num_pagefaults_get()
Simple counter of number of successfully handled page faults by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie a45486e1d5 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 2fe1fc53c8 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 3e51a7a775 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie b8ae437967 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See
doxygen for these API definitions for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie e9bfd64110 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie db56722729 mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 9111ec2c19 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 69d39af5e6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 2ca5fb7e06 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 9d2ebfff58 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Peter Bigot 40d3653758 device: add post-process of elf file to manage device handles
Following the idiom used for system calls, add script support to read
the initial application binary to identify which devices are defined,
and to use their offset in the device array as their unique handle
rather than the externally-defined ordinal from devicetree.  The
device dependency arrays are updated to use these handles.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-21 14:49:04 -06:00
Carlo Caione e77c841023 cache: Expand the APIs for cache flushing
The only two supported operations for data caches in the cache framework
are currently arch_dcache_flush() and arch_dcache_invd().

This is quite restrictive because for some architectures we also want to
control i-cache and in general we want a finer control over what can be
flushed, invalidated or cleaned. To address these needs this patch
expands the set of operations that can be performed on data and
instruction caches, adding hooks for the operations on the whole cache,
a specific level or a specific address range.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Anas Nashif 989ebf6c35 kernel: add vrfy hooks to support userspace with condvar
Add needed vrfy hooks for userspace support.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-19 08:55:47 -05:00
Anas Nashif 06eb489c45 kernel: add condition variables
Introduce condition variables similar to how they are done in POSIX with
a mutex.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-19 08:55:47 -05:00
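A classic producer/consumer sketch with the new condition variable API; names are illustrative:

```
#include <zephyr/kernel.h>
#include <stdbool.h>

K_MUTEX_DEFINE(lock);
K_CONDVAR_DEFINE(cond);
static bool ready;

void consumer(void)
{
	k_mutex_lock(&lock, K_FOREVER);
	while (!ready) {
		/* atomically releases the mutex, re-acquires it on wakeup */
		k_condvar_wait(&cond, &lock, K_FOREVER);
	}
	k_mutex_unlock(&lock);
}

void producer(void)
{
	k_mutex_lock(&lock, K_FOREVER);
	ready = true;
	k_condvar_signal(&cond);
	k_mutex_unlock(&lock);
}
```
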
Ningx Zhao ff7ec0ce40 kernel: poll: remove unreachable code
register_event always returns 0, so the conditional will
always take the first branch and code in the else part
is never reached.

Fixes #31282

Signed-off-by: Ningx Zhao <ningx.zhao@intel.com>
2021-01-18 11:02:59 -05:00
Enjia Mai 53ca709828 tests: coverage: exclude the CODE UNREACHABLE of code coverage
1. Exclude the CODE UNREACHABLE line while generating coverage report.
2. Exclude the memory domain deprecated API when calculating code
coverage.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-01-15 12:42:00 -05:00
Nicolas Pitre 7a22a4bdf6 heap: clean up some size related issues
First, the maximum heap size must fit in 31 bits worth of chunks
because the internal 32-bit field holding the size is shared with
the `used` bit.

Then the mention of a 256-byte block in the doc is no longer
relevant. That pertained to the previous allocator implementation.

And ditto for the HEAP_MEM_POOL_MIN_SIZE kconfig option.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-15 12:08:20 -05:00
Andy Ross ef626571b2 kernel/sched: Optimize deadline comparison
Needing to check the current cycle time (which involves a spinlock and
register read on most architectures) is wasteful in the scheduler
priority predicate, which is a hot path.  If we "burn" one bit of
precision (and document the rule), we can do the comparison without
knowing the current time.

2^31 cycles is still far longer than a live deadline thread in any
legitimate realtime app should ever live before being scheduled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-01-15 11:35:50 -05:00
Maureen Helm f63385204c linker: arm: Add cortex_m itcm section
Adds a linker section for Cortex-M instruction tightly coupled memory
(ITCM), similar to the existing section for DTCM. A new executable MPU
region is not added as there isn't currently a need to make this section
accessible to user mode. This section can be enabled by setting a device
tree chosen node zephyr,itcm.

Signed-off-by: Maureen Helm <maureen.helm@nxp.com>
2021-01-15 14:51:20 +01:00
Andy Ross e956639dd6 kernel: Remove CONFIG_LEGACY_TIMEOUT_API
This was a fallback for an API change several versions ago.  It's time
for it to go.

Fixes: #30893

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-01-14 21:33:16 -05:00
Daniel Leung fe477ea6d3 kernel: userspace: aligned memory allocation for dynamic objects
This allows allocating dynamic kernel objects with memory alignment
requirements. The first candidate is for thread objects where,
on some architectures, it must be aligned for saving/restoring
registers.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-13 09:43:55 -08:00
Daniel Leung 0c9f9691c4 kernel: mempool: add z_thread_aligned_alloc
This adds a new z_thread_aligned_alloc() to do memory allocation
with required alignment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-13 09:43:55 -08:00
Enjia Mai 8d5a22c3c1 tests: enable the code coverage report for qemu_x86_64
Enable the code coverage report for qemu_x86_64 platform.
See issue #17991 please.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-01-05 10:32:38 -08:00
Peter Bigot 19cb44bab7 kernel: idle: fix builds with PM but no system clock
PM depends on SYS_CLOCK_EXISTS in Kconfig but several boards have
Kconfig overrides that allow the dependency to be ignored, so
CONFIG_PM=y even though CONFIG_SYS_CLOCK_EXISTS=n.  Fix the code so
that the true dependency is reflected in the generated code.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2020-12-27 18:18:52 +01:00
Christopher Friedt 135ffaff74 kernel/k_malloc: add k_aligned_alloc
This change adds z_heap_aligned_alloc() and k_aligned_alloc()
and changes z_heap_malloc() and k_malloc() to be small wrappers around
the aligned variants.

Fixes #29519

Signed-off-by: Christopher Friedt <chrisfriedt@gmail.com>
2020-12-27 18:17:07 +01:00
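A minimal sketch of the added allocator; it assumes a kernel heap is configured (CONFIG_HEAP_MEM_POOL_SIZE > 0), and the alignment/size values are arbitrary:

```
#include <zephyr/kernel.h>

void aligned_heap_use(void)
{
	/* 64-byte aligned, 256-byte block from the kernel heap */
	void *buf = k_aligned_alloc(64, 256);

	if (buf != NULL) {
		/* ... */
		k_free(buf);
	}
}
```
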
Marcin Niestroj 11cb1cf336 kernel: sched: fix legacy timeout calculation in z_tick_sleep
Ticks should be assigned directly to the timeout value in the case of
CONFIG_LEGACY_TIMEOUT_API=y, just as they were before the referenced patch.

Fixes: 7a815d5d99 ("kernel: sched: Use k_ticks_t in z_tick_sleep")
Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com>
2020-12-18 14:03:25 -05:00
Andrew Boie d2ad783a97 mmu: rename z_mem_map to z_phys_map
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
POSIX mmap(), which might confuse people.

The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.

Parameter names to z_phys_map adjusted slightly to be more
consistent with names used in other memory mapping functions.
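
A usage sketch under the new name (the physical address, size, and
flag values are placeholders for illustration):

    uintptr_t phys_addr = 0x40000000;   /* placeholder physical address */
    size_t region_size = 0x1000;        /* placeholder size */
    uint8_t *virt;

    /* Map the physical range into the kernel's virtual address space;
     * the chosen virtual address is returned through 'virt'. */
    z_phys_map(&virt, phys_addr, region_size, K_MEM_PERM_RW);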

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-16 08:55:55 -05:00
Daniel Leung 8ad5ad2214 kernel: don't unlock and then lock immediately in idle loop
Inside the idle loop, in some configurations, IRQs are unlocked and
then immediately locked again. This has a side effect:

1. IRQs are unlocked in the middle of the loop.
2. Another thread (A) can now run, so the idle thread is un-scheduled.
3. Thread A runs to its end and goes through the thread
   self-abort path.
4. The idle thread is rescheduled and continues running the rest of
   the loop, where it eventually calls k_cpu_idle(). The "pending
   abort" path for thread A has not been executed at this point.
5. Now thread A is suspended, and the CPU is idle, waiting
   for interrupts (e.g. timeouts).
6. Thread B is waiting to join on thread A. Since thread A has
   not been terminated yet, thread B waits until the idle thread
   runs again and starts executing from the beginning of the
   while loop.
7. Depending on how many threads are running and how active
   the platform is, the idle thread may not run again for a while,
   resulting in thread B appearing to be stuck.

To avoid this situation, the unlock/lock pair in the middle of
the loop is removed so that no rescheduling can happen mid-loop.
When there is no thread abort pending, the idle thread simply locks
IRQs and calls k_cpu_idle(). This is almost identical to the idle
loop before the thread abort code was introduced (except for
the check for cpu->pending_abort).
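
In rough outline, the no-abort path now looks like this (simplified
sketch, not the exact kernel code):

    /* Lock interrupts before the pending-abort check and keep them
     * locked, so no other thread can be scheduled in between. */
    (void) arch_irq_lock();

    if (_current_cpu->pending_abort == NULL) {
            /* k_cpu_idle() atomically re-enables interrupts and waits
             * for the next one. */
            k_cpu_idle();
    }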

Fixes #30573

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-12-15 08:27:27 -05:00
Anas Nashif 3d33d767f1 twister: rename in code
Some code still had references to sanitycheck; replace them with twister.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-11 14:13:02 -05:00
Enjia Mai 00d7f2cfb6 kernel: thread: make offload_sem visible for releasing it outside
In order to release the irq_offload semaphore outside kernel/thread.c,
make it visible by declaring it non-static under ztest. This is needed,
for example, when irq_offload() is called to enter interrupt context
and a fatal error happens: the semaphore must then be released in the
fatal error handler, or irq_offload() will remain locked and can never
be used again.
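
A test's fatal error handler could then do something along these
lines (illustrative sketch for a ztest-based test):

    #include <kernel.h>
    #include <ztest.h>

    /* From kernel/thread.c, non-static under ztest. */
    extern struct k_sem offload_sem;

    void k_sys_fatal_error_handler(unsigned int reason,
                                   const z_arch_esf_t *esf)
    {
            ARG_UNUSED(reason);
            ARG_UNUSED(esf);

            /* Release the irq_offload() semaphore so later tests can
             * still use irq_offload() after this fatal error. */
            k_sem_give(&offload_sem);

            ztest_test_pass();
    }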

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2020-12-09 21:52:09 -05:00
Anas Nashif 98cef8376c power: cleanup kernel idle and power hooks
Clean up the power management code, remove some duplication, and
isolate power management code from the kernel code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif 8a97717c72 power: rename _pm_idle_exit_notification_disable
Rename API to be more consistent with guidelines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif f9238d6370 power: set_kernel_idle_time_in_ticks -> _set_kernel_idle_time_in_ticks
Rename internal function.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif e3937453a6 power: rename _sys_suspend/_sys_resume
Be consistent in PM namespaces.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif e0f3833bf7 power: remove SYS_ and sys_ prefixes
Remove SYS_ and sys_ from all PM related functions and defines.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif dd931f93a2 power: standardize PM Kconfigs and cleanup
- Remove the SYS_ prefix
- Shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE

and use PM_ as the prefix for all PM-related Kconfigs

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif 87ddddae52 Revert "kernel: fix usage of KERNEL_COHERENCE macro"
This reverts commit 67c5e6b0c0.

This is causing build issues on some platforms. Revert for now.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-08 14:27:27 -05:00
Maximilian Bachmann 34d7c78f4f kernel: add k_heap_aligned_alloc
k_heap did not have an aligned alloc function, even though
this is supported by the internal sys_heap.
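
Usage sketch (the heap name, sizes, and alignment below are
illustrative):

    K_HEAP_DEFINE(my_heap, 1024);

    /* 128 bytes aligned to a 32-byte boundary, without blocking. */
    void *p = k_heap_aligned_alloc(&my_heap, 32, 128, K_NO_WAIT);

    if (p != NULL) {
            /* ... use the buffer ... */
            k_heap_free(&my_heap, p);
    }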

Signed-off-by: Maximilian Bachmann <m.bachmann@acontis.com>
2020-12-08 13:21:26 -05:00
Anas Nashif e1d42724e5 kernel: make KERNEL_COHERENCE depend on ARCH_HAS_COHERENCE
We can't enable KERNEL_COHERENCE if the architecture does not support it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-08 09:30:02 -05:00
Anas Nashif 67c5e6b0c0 kernel: fix usage of KERNEL_COHERENCE macro
Add missing CONFIG_ to KERNEL_COHERENCE usage in code.
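
That is, guards of this shape (illustrative):

    /* Before: the bare symbol is never defined, so the guarded code
     * silently drops out. */
    #ifdef KERNEL_COHERENCE
    #endif

    /* After: test the macro generated by Kconfig. */
    #ifdef CONFIG_KERNEL_COHERENCE
    #endif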

Fixes #30380

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-08 09:30:02 -05:00