Commit graph

3196 commits

Author SHA1 Message Date
Andrew Boie
f5951cd88f kernel: syscall_handler: get rid of stdarg
We can just implement this as a macro and not needlessly
run afoul of MISRA-C rule 17.1.

Fixes: #10012

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-25 13:42:03 -08:00
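
A minimal sketch of the macro approach (the helper name is illustrative, not the actual syscall_handler code):

```c
#include <stdio.h>

/* Hypothetical helper: forwarding arguments with a (GNU-style)
 * variadic macro needs no <stdarg.h>, so MISRA-C rule 17.1 is
 * never in play.
 */
#define sys_log(fmt, ...) printf(fmt, ##__VA_ARGS__)

int main(void)
{
    sys_log("syscall id=%d\n", 42);
    return 0;
}
```
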
Andrew Boie
4ce652e4b2 userspace: remove APP_SHARED_MEM Kconfig
This is an integral part of userspace and cannot be used
on its own. Fold into the main userspace configuration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-23 07:43:55 -05:00
Andrew Boie
01100eadb8 kernel: add stack canary to libc partition
User mode needs to be able to read this value in
compiler-generated function prologues/epilogues.

Special handling is needed in init.c for arches that use
_data_copy, which runs before _Cstart() gets called. We
need to make sure that the compiler's stack canary checks
in _data_copy itself do not fail.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-22 18:50:43 -05:00
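
A sketch of the resulting arrangement, using the app_memdomain macros; the exact placement in Zephyr's tree may differ:

```c
#include <stdint.h>
#include <app_memory/app_memdomain.h>

/* Partition holding essential C library globals readable by user mode. */
K_APPMEM_PARTITION_DEFINE(z_libc_partition);

/* Canary value compared in compiler-generated prologues/epilogues;
 * keeping it in the partition lets user-mode code read it.
 */
K_APP_DMEM(z_libc_partition) uintptr_t __stack_chk_guard;
```
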
Andrew Boie
17ce822ed9 app_shmem: create generic libc partition
We need a generic name for the partition containing
essential C library globals. We're going to need to
add the stack canary guard to this area so user mode
can read it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-22 18:50:43 -05:00
Aurelien Jarno
992f29a1bc arch: make __ramfunc support transparent
Instead of requiring users to enable ramfunc support manually, make it
transparently available, keeping the MPU region disabled when unused so
that an MPU region is not wasted. This does cost 24 bytes of code area
when the MPU is disabled and 48 bytes when it is enabled, plus probably
a dozen CPU cycles during boot, which I believe is acceptable.

Note that when XIP is not used, code is already in RAM, so the __ramfunc
keyword does nothing, but does not generate an error.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-02-22 11:36:50 -08:00
Aurelien Jarno
eb097bd095 arch: arm: mpu: get the __ramfunc region size from the linker
The linker file defines the __ramfunc_ram_size symbol to get the size
of the __ramfunc_ram section. Use that instead of computing the value at
runtime from the start and end symbols. This saves 16 bytes of code with
CONFIG_RAM_FUNCTION=y.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
2019-02-22 11:36:50 -08:00
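
On the C side, the idiom for consuming such a linker symbol looks like this (a sketch; the accessor function is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

extern char __ramfunc_ram_size[];   /* value provided by the linker */

static size_t ramfunc_size(void)
{
    /* The symbol's address *is* the size, so no end - start math
     * (and no runtime subtraction code) is needed.
     */
    return (size_t)(uintptr_t)__ramfunc_ram_size;
}
```
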
qianfan Zhao
e1cc657941 arm: Placing the functions which holds __ramfunc into '.ramfunc'
Using __ramfunc places a function in RAM instead of flash.
Code that, for example, reprograms flash at runtime can't
execute from flash; in that case the code must be placed in RAM.

This commit creates a new section named '.ramfunc' in the linker
scripts; all functions carrying the __ramfunc keyword are stored
in that section and are loaded from flash to SRAM after the system
boots.

Fixes: #10253

Signed-off-by: qianfan Zhao <qianfanguijin@163.com>
2019-02-22 11:36:50 -08:00
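
Typical usage then needs only the keyword (a sketch; the function body is illustrative):

```c
#include <stdint.h>
#include <linker/section_tags.h>   /* provides __ramfunc */

/* Runs from SRAM, so it stays executable while flash is being
 * reprogrammed.
 */
__ramfunc void flash_program_word(volatile uint32_t *dst, uint32_t val)
{
    *dst = val;
    /* ...then poll the flash controller's busy flag... */
}
```
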
Piotr Zięcik
c45961daae power: Rework OS <-> Application interface
This commit simplifies the OS <-> Application interface controlling power
management. In the previous approach, application-based PM required
overriding the sys_suspend() and sys_resume() functions. As these
functions actually implemented the power state change, the application
basically had to provide its own implementation of all PM-related
functionality, which was neither portable nor easy to maintain.

This commit changes that scheme: sys_suspend() and sys_resume()
are now system functions, while the application can either use
built-in power management policies or provide its own. All details
of power mode switching are now handled by the OS.

Also, this commit cleans up the Kconfig options related to system-level
power management, grouping them under a common CONFIG_SYS_PM_ prefix.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-19 13:25:36 -05:00
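
Under the new split, the application supplies only a policy decision while the OS performs the state change; a hypothetical sketch (the hook name and state constants are assumptions, not verbatim from this commit):

```c
#include <stdint.h>
#include <power.h>

/* Hypothetical application policy: long idle forecasts map to deep
 * sleep; the OS handles the actual suspend/resume sequencing.
 */
enum power_states sys_pm_policy_next_state(int32_t ticks)
{
    if (ticks > 10000) {
        return SYS_POWER_STATE_DEEP_SLEEP_1;  /* assumed name */
    }
    return SYS_POWER_STATE_ACTIVE;            /* assumed name */
}
```
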
Carlos Stuart
75f77db432 include: misc: util.h: Rename min/max to MIN/MAX
There are issues using lowercase min and max macros when compiling a C++
application with a third-party toolchain such as GNU ARM Embedded when
using some STL headers, e.g. <chrono>.

This is because there are actual C++ functions called min and max
defined in some of the STL headers, and these macros interfere with
them. Changing the macros to UPPERCASE, which is consistent with almost
all other pre-processor macros, avoids this naming conflict.

All files that use these macros have been updated.

Signed-off-by: Carlos Stuart <carlosstuart1970@gmail.com>
2019-02-14 22:16:03 -05:00
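
The uppercase forms are plain function-like macros (matching the util.h style):

```c
/* Uppercase names no longer collide with C++ functions such as
 * std::min/std::max or the min()/max() members used by <chrono>.
 */
#define MIN(a, b) (((a) < (b)) ? (a) : (b))
#define MAX(a, b) (((a) > (b)) ? (a) : (b))
```
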
Andy Ross
86380483da kernel/work_q: Fix block-in-spinlock bug
Work queues are implemented in terms of k_queue objects which provide
their own synchronization.  In particular insertion is potentially
blocking and always acts as a reschedule point, which means that it
must not be called with spinlocks held.

Release the lock first, and do a little cleanup of the resulting
k_delayed_work_submit_to_queue() logic.

Fixes #13411

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-14 19:45:20 -05:00
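
The shape of the fix, as a sketch (names illustrative; work queues sit on k_queue, whose insertion is a reschedule point):

```c
#include <kernel.h>

static struct k_spinlock lock;
static struct k_queue pending;

void submit_item(void *item)
{
    k_spinlock_key_t key = k_spin_lock(&lock);

    /* ...update delayed-work bookkeeping under the lock... */

    k_spin_unlock(&lock, key);       /* release before inserting */
    k_queue_append(&pending, item);  /* may block/reschedule */
}
```
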
Andrew Boie
2cfeba8507 x86: implement interrupt stack trampoline
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.

Adjustments to page tables are now as follows:

- Any adjustments for stack memory access now are always done
  to the user page tables

- Any adjustments for memory domains are now always done to
  the user page tables

- With KPTI, resetting a page now clears the present bit

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Ioannis Glaropoulos
228702e6e1 kernel: minor syntax fix in Kconfig
Minor style (syntax) fix in the help text of the
EXECUTION_BENCHMARKING Kconfig symbol.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-02-12 08:29:33 -06:00
Piotr Zięcik
9cc63e07e4 power: Fix naming of Kconfig options controlling deep sleep states
This commit changes the names of SYS_POWER_DEEP_SLEEP* Kconfig
options in order to match SYS_POWER_LOW_POWER_STATE* naming
scheme.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-12 07:46:32 -05:00
Piotr Zięcik
7a49356c77 power: Fix naming of Kconfig options controlling low power states
The names SYS_POWER_LOW_POWER_STATE_SUPPORTED and SYS_POWER_LOW_POWER_STATE
suggest a single low power state, but these options control multiple
low power states. This commit uses plurals in the names to indicate
that.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-12 07:46:32 -05:00
Andy Ross
1202810119 kernel/sched: _thread_priority_set needs to be sched_lock aware
This API doesn't use the normal thread priority comparison itself, so
it doesn't get the magic that thread_base.prio provides.  If called
when another thread should be run, this would always preempt the
current thread, even if the scheduler lock was taken.

That was benign until the recent spinlockification exposed it: a mutex
in the philosophers test run in preempt_only mode would swap away while
holding a spinlock (which used to work with irq locks) and fail later
with a "recursive" spinlock assert.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
8a3d57b6cc kernel/userspace: Spinlockification
This port is a little different.  Most subsystem synchronization uses
simple critical sections that can be replaced with global or
per-object spinlocks.  But the userspace code was heavily exploiting
the fact that irq_lock was recursive and could be taken at any time.
So outer functions were doing locking and then calling into inner
helpers that would take their own lock (because they were called from
other contexts that did not lock).

Rather than try to rework this right now, this just creates a set of
spinlocks corresponding to the recursive states in which they are
taken, to preserve the existing semantics exactly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
b29fb220b1 kernel/timer: Spinlockify
Simple global lock around the timer API.  Actually a lot of this usage
was using needless vestigial locking around existing scheduler and
timeout APIs that are now internally synchronized.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f582b55dd6 kernel/pipe: Spinlockify
One spinlock per pipe object.  Also removed some vestigial locking
around _ready_thread().  That call is internally synchronized now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
d27d4e6af2 kernel/sched: Remove remaining irq_lock use
The k_sleep() locking was actually to protect the _current state from
preemption before the context switch, so document that and replace it
with a spinlock.  Should probably unify this with the rather cleaner
logic in pend_curr(), but right now "sleeping" and "pended" are
needlessly distinct states.

And we can remove the locking entirely from k_wakeup().  There's no
reason for any of that to need to be synchronized.  Even if we're
racing with other thread modifications, the state on exit will be a
runnable thread without a timeout, or whatever timeout/pend state the
other side was requesting (i.e. it's a bug, but not one solved by
synchronization).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
be03dbd4c7 kernel/msg_q: Spinlockify
One lock per msgq.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f0933d0ded kernel/stack: Spinlockify
One lock per stack.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
9eeb6b8779 kernel/mbox: Spinlockify
Straightforward per-struct-k_mbox lock.  Nothing changes in locking
strategy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
7df0216d1e kernel/mutex: Spinlockify
Use a subsystem lock, not a per-object lock.  Really we want to lock
at mutex granularity where possible, but (1) that has non-trivial
memory overhead vs. e.g. directly spinning on the mutex state and (2)
the locking in a few places was originally designed to protect access
to the mutex *owner* priority, which is not 1:1 with a single mutex.

Basically the priority-inheriting mutex code will need some rework
before it works as a fine-grained locking abstraction in SMP.

Note that this fixes an invisible bug: with the older code,
k_mutex_unlock() would actually call irq_unlock() twice along the path
where there was a new owner, which is benign on existing architectures
(so long as the key argument is unchanged) but was never guaranteed to
work.  With a spinlock, unlocking an unlocked/unowned lock is a
detectable assertion condition.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
603ea42764 kernel/queue: Spinlockify
Straightforward port.  Each struct k_queue object gets a spinlock to
control obvious data ownership.

Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock.  But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled.  The spinlock API detects that as a recursive lock when
asserts are enabled.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
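
An illustrative reconstruction of the repaired path (the allocator call is a stand-in, not the actual Zephyr code):

```c
#include <kernel.h>
#include <errno.h>

void *z_alloc_node(void *data);   /* hypothetical node allocator */

static int queue_insert(struct k_queue *queue, void *prev, void *data)
{
    k_spinlock_key_t key = k_spin_lock(&queue->lock);
    void *node = z_alloc_node(data);

    if (node == NULL) {
        /* the previously missing unlock on the -ENOMEM path */
        k_spin_unlock(&queue->lock, key);
        return -ENOMEM;
    }

    sys_sflist_insert(&queue->data_q, prev, node);
    k_spin_unlock(&queue->lock, key);
    return 0;
}
```
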
Andy Ross
f6521a360d kernel/thread_abort: Remove needless locking
The two APIs protected by this lock are themselves internally
synchronized.  Replace the irq_lock with a spinlock anyway, because
what I think it's doing is preventing a race where something else
(an ISR, or something it wakes up) mucks with the thread before this
completes.  Seems fragile on SMP as it stands, but this preserves
behavior on uniprocessor architectures.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
c0bdcbaaf8 kernel/mem_slab: Spinlockify
Use a subsystem lock instead of a per-slab lock for now

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
e456d0f7dd kernel/thread: Spinlockify
Straightforward spinlock around the global thread state.  Two changes
to the locking strategy were needed:

1. There was a needless recursive lock taken in schedule_new_thread().
This is only ever invoked in circumstances where the lock was already
held, or where there is no need for internal synchronization.

2. The recursive irq_lock() around the loop that spawns the initial
static threads (which happens at the start of main thread execution)
was removed.  Most of the job (i.e. making sure the threads don't run
before the loop is finished) was already covered by the sched_lock it
was taking, and the attempt to promise that all the timeouts happen on
the same tick is true by construction at system startup on
uniprocessor systems and impossible to guarantee under SMP (where
other CPUs can take that timer interrupt).  We don't document or test
for this feature, so don't try to be fancy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
84b47a9290 kernel/mempool: Spinlockify
Really the locking in this file is vestigial.  It only exists because
the scheduler's _unpend_all() call to wake up everyone waiting on a
wait_q is unsynchronized, because it was written to assume
irq_lock-style-locking.  It would be cleaner to put that locking into
the wait_q itself and/or use the scheduler's subsystem lock.  But it's
not clear there's any performance benefit, so let's stick with the
more easily verifiable change first.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f2b1a4bb64 kernel/poll: Spinlockify
Poll gets a single subsystem lock for now.  The existing locking in
Ben's code is subtle, being used both for latency control and for
critical section protection.  So getting each k_poll_event to use a
separate lock will require care and a little logic change.  Do the
simple version for now, which still works to decouple it from the
global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reasons, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code that does a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful as it spams the
global lock.  Provide an _unlocked() variant of
_Swap/_reschedule/_pend_curr for simplicity and efficiency.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
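
A minimal sketch of how such a variant can be built (internal names assumed from the commit text):

```c
#include <kernel.h>

int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);  /* internal */

static inline void _Swap_unlocked(void)
{
    struct k_spinlock lock = {};   /* dummy lock, never contended */
    k_spinlock_key_t key = k_spin_lock(&lock);

    (void)_Swap(&lock, key);       /* releases the lock atomically */
}
```
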
Andy Ross
da37a53a54 kernel/k_sem: Spinlockify
Switch semaphores to use a subsystem spinlock instead of the system
irqlock.

Note that this is only "half way there".  Semaphores will no longer
contend with other irqlock users on SMP systems, but all semaphores
are still sharing the same lock.  Really we want semaphores to be
independently synchronized, but adding 4 bytes to every one (there are
a LOT of these things) for a separate spinlock is too much to pay.

Rather, a proper SMP-aware implementation would spin on the count
variable directly.  But let's not rock that boat quite yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
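
The alternative mentioned in the last paragraph would look roughly like this (a sketch, not what this commit implements):

```c
#include <atomic.h>
#include <stdbool.h>

static bool sem_try_take(atomic_t *count)
{
    atomic_val_t old;

    do {
        old = atomic_get(count);
        if (old == 0) {
            return false;          /* no units available */
        }
        /* retry if another CPU changed the count meanwhile */
    } while (!atomic_cas(count, old, old - 1));

    return true;
}
```
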
Andy Ross
ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
04382b9a2a kernel/mem_domain: Spinlockify
Simple locking requirements here mean we can just use a single
subsystem lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
32a29d2805 kernel/atomic_c: Spinlockify
Mostly useless patch.  All architectures have their own code for
atomic operations and don't use this fallback.  Still, it's a trivial
locking setup and we might as well.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
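
The fallback's shape, sketched (the actual atomic_c.c follows this pattern across the full operation set):

```c
#include <kernel.h>

static struct k_spinlock lock;   /* one lock for all generic atomics */

atomic_val_t atomic_add(atomic_t *target, atomic_val_t value)
{
    k_spinlock_key_t key = k_spin_lock(&lock);
    atomic_val_t ret = *target;

    *target += value;
    k_spin_unlock(&lock, key);

    return ret;
}
```
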
Andy Ross
a37a981b21 kernel/work_q: Spinlockify
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock.  Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
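
The recursive-lock fix follows a common split pattern, sketched here with illustrative names:

```c
#include <kernel.h>

static struct k_spinlock lock;

/* Assumes 'lock' is already held; callable from other locked paths. */
static int cancel_locked(struct k_delayed_work *work)
{
    /* ...remove the work item's timeout, mark it idle... */
    return 0;
}

/* Public entry point wraps the unlocked helper with the lock. */
int delayed_work_cancel(struct k_delayed_work *work)
{
    k_spinlock_key_t key = k_spin_lock(&lock);
    int ret = cancel_locked(work);

    k_spin_unlock(&lock, key);
    return ret;
}
```
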
Andy Ross
5aa7460e5c kernel/spinlock: Move validation out of header inlines
The validation checking recently added to spinlocks is useful, but
requires kernel-internals like _current and _current_cpu in a header
context that tends to be needed before those are declared (or where we
don't want them declared), and is causing big header dependency
headaches.

Move it to C code, it's just a validation tool, not a performance
thing.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
53cae5f471 kernel: Use _reschedule() instead of _Swap() where possible
These two spots were duplicating logic that is already done inside
_reschedule(), which is the cleaner, less dangerous API.  Use it where
possible when outside the scheduler internals.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
dc0713a706 kernel: Cleanup. Remove redundant test when calling _Swap()
_Swap() must already handle the case where _get_next_ready_thread() is
the same as _current.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Kumar Gala
bfaaa6bbe9 dts: Convert CONFIG_CCM to DT_CCM
Since we now do DTS before Kconfig, we should try to stop DTS from
creating Kconfig-namespaced symbols and leave that to Kconfig.  So
rename CONFIG_CCM_<FOO> to DT_CCM_<FOO>.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 10:29:57 -06:00
Piotr Zięcik
d02e3ebd4c power: Eliminate SYS_PM_* power states.
The power management framework used two different abstractions
to describe power states. The SYS_PM_* abstraction gave coarse
information about what kind of power state (low power or deep
sleep) was used, while the SYS_POWER_STATE_* abstraction provided
information about the particular power mode.

This commit removes the SYS_PM_* abstraction, as the same
information is already carried in SYS_POWER_STATE_*.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-08 09:07:00 -05:00
Andrew Boie
41f6011c36 userspace: remove APPLICATION_MEMORY feature
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.

To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
4b4f773484 libc: set up memory partitions
* Newlib now defines a special z_newlib_partition containing
  all globals relevant to newlib. Most of these are in libc.a
  with a heap tracking variable in newlib's hooks.

* Both C libraries now expose a k_mem_partition containing the
  bounds of the malloc heap arena. Threads that want to use
  libc malloc() will need to add this to their memory domain.

* z_newlib_get_heap_bounds has been removed, in favor of the
  memory partition for the heap arena

* ztest now includes the C library partitions in its memory
  domain.

* The mem_alloc test now runs in user mode to prove that this
  all works for both C libraries.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
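
What "add this to their memory domain" looks like in practice (a sketch; z_malloc_partition as the minimal libc's partition name is taken here as an assumption):

```c
#include <kernel.h>
#include <app_memory/app_memdomain.h>

extern struct k_mem_partition z_malloc_partition;  /* assumed name */

static struct k_mem_domain app_domain;

void grant_malloc_access(void)
{
    struct k_mem_partition *parts[] = { &z_malloc_partition };

    k_mem_domain_init(&app_domain, ARRAY_SIZE(parts), parts);
    k_mem_domain_add_thread(&app_domain, k_current_get());
}
```
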
Andrew Boie
7ecc359f2c userspace: do not auto-cleanup static objects
Dynamic kernel objects enforce that the permission state
of an object is also a reference count; using a kernel
object without permission regardless of caller privilege
level is a programming bug.

However, this is not the case for static objects. In
particular, supervisor threads are allowed to use any
object they like without worrying about permissions, and
the logic here was causing cleanup functions to be called
over and over again on kernel objects that were actually
in use.

The automatic cleanup mechanism was intended for
dynamic objects anyway, so just skip it entirely for
static objects.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andy Gross
2e8cdc1e7f kernel: Enforce k_mem_slab block size alignment
This patch puts checks in place to ensure that callers of the k_mem_slab
APIs provide word-aligned block sizes.  If this is not done, it can
result in unaligned accesses and subsequent crashes.

Signed-off-by: Andy Gross <andy.gross@linaro.org>
2019-02-06 07:18:45 -05:00
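
For callers the rule is simply that block sizes stay word-aligned; a sketch:

```c
#include <kernel.h>

/* 16-byte blocks are word-aligned, so this definition is accepted. */
K_MEM_SLAB_DEFINE(good_slab, 16, 8, 4);

/* A 13-byte block size would now be rejected by the new checks
 * instead of causing unaligned accesses later:
 * K_MEM_SLAB_DEFINE(bad_slab, 13, 8, 4);
 */
```
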
Ioannis Glaropoulos
6c54cac73d kernel: mem_domain: extend sane_partition for non-overlapping regions
This commit extends the implementation of sane_partition(..) in
kernel/mem_domain.c so that it generates an ASSERT if partitions
inside a mem_domain overlap. This extension is only implemented
for the case when the MPU requires non-overlapping regions.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-02-05 09:28:59 -08:00
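
The assertion reduces to a standard interval-overlap test, roughly:

```c
#include <stdbool.h>
#include <kernel.h>

/* Two partitions overlap iff each one starts before the other ends. */
static bool parts_overlap(const struct k_mem_partition *a,
                          const struct k_mem_partition *b)
{
    return a->start < (b->start + b->size) &&
           b->start < (a->start + a->size);
}
```
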
Anas Nashif
427cc77115 kernel: fix smp build on esp32
set_kernel_idle_time_in_ticks is not used in non-SMP code.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-02-04 18:16:58 -05:00
Daniel Leung
4bb10eeada kernel/sched: fix CPU mask kconfig typo
The Kconfig symbol used in BUILD_ASSERT_MSG() is missing an "S",
so add it back.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-02-04 15:53:09 -05:00
Andy Ross
ab46b1b3c5 kernel/sched: CPU mask affinity/pinning API
This adds a simple implementation of SMP CPU affinity to Zephyr.  The
API is simple and doesn't try to invent abstractions like "cpu sets".
Each thread has an enable/disable flag associated with each CPU in the
system, and the bits can be turned on and off (for threads that are
not currently runnable, of course) using an easy three-function API.

Because the implementation picked requires enumerating runnable
threads in priority order looking for ones that match the current CPU,
this is not a good fit for the SCALABLE or MULTIQ scheduler backends,
so it currently can be enabled only for SCHED_DUMB (which is the
default anyway).  Fancier algorithms do exist, but even the best of
them scale as O(N_CPUS), so they aren't quite constant time and often
require significant memory overhead to keep separate lists for
different cpus/sets.

The intended use here is for apps that want to "pin" threads to
specific CPUs for latency control, or conversely to prevent certain
threads from taking time on specific CPUs to leave them free for fast
response.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
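
Usage sketch with the new calls (the thread must not be runnable while its mask changes):

```c
#include <kernel.h>

extern struct k_thread worker;   /* created elsewhere, not yet started */

void pin_worker_to_cpu0(void)
{
    k_thread_cpu_mask_clear(&worker);      /* forbid every CPU... */
    k_thread_cpu_mask_enable(&worker, 0);  /* ...then allow CPU 0 only */
}
```
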
Andy Ross
6d9106f288 kernel/init: Fix dummy thread initialization on SMP systems
When under SMP, _current is a macro that indirects to a CPU-specific
address, and that trick won't work until kernel_arch_init() has
returned.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 19:10:08 -05:00