Commit graph

3196 commits

Andrew Boie
97bf001f11 userspace: get dynamic objs from thread rsrc pools
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.
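
A minimal usage sketch (assuming the k_object_alloc() API from the
dynamic-objects patch below and the k_object_release() self-release
call from this series; not verbatim kernel code):

    #include <kernel.h>

    void demo(void)
    {
        /* drawn from the calling thread's resource pool,
         * not the kernel heap */
        struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

        if (sem == NULL) {
            return; /* resource pool exhausted */
        }
        k_sem_init(sem, 0, 1);

        /* dropping the last reference to a dynamically
         * allocated object frees it automatically */
        k_object_release(sem);
    }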

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may be assigned to threads, and any requests made on behalf of
the calling thread will draw heap memory from that pool.
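
A minimal sketch of the pairing, assuming the
k_thread_resource_pool_assign() form this series introduces:

    #include <kernel.h>

    /* K_MEM_POOL_DEFINE(name, minsz, maxsz, nmax, align) */
    K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

    void setup(struct k_thread *thread)
    {
        /* allocations made on behalf of 'thread' now draw
         * from app_pool instead of the system heap */
        k_thread_resource_pool_assign(thread, &app_pool);
    }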

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
337e74334c userspace: automatic resource release framework
An object's set of permissions is now also used as a form
of reference counting. If an object's permission bitmap gets
completely cleared, it is now possible to specify object type
specific cleanup functions to be implicitly called.

No objects are enabled yet. Forthcoming patches will do this on
a per-object basis.
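
A standalone sketch of the idea (names are illustrative, not the
kernel's internal API): each object type may register a cleanup hook
that runs when the permission bitmap clears to zero.

    #include <stdbool.h>
    #include <string.h>

    typedef void (*obj_cleanup_fn)(void *obj);

    struct obj_meta {
        unsigned char perms[4];  /* per-thread permission bits */
        obj_cleanup_fn cleanup;  /* type-specific hook, may be NULL */
        void *obj;
    };

    static bool perms_all_clear(const struct obj_meta *m)
    {
        static const unsigned char zero[sizeof(m->perms)];

        return memcmp(m->perms, zero, sizeof(zero)) == 0;
    }

    static void revoke_perm(struct obj_meta *m, int thread_index)
    {
        m->perms[thread_index / 8] &= ~(1 << (thread_index % 8));
        if (perms_all_clear(m) && m->cleanup != NULL) {
            m->cleanup(m->obj);  /* implicit type-specific cleanup */
        }
    }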

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
e9cfc54d00 kernel: remove k_object_access_revoke() as syscall
Forthcoming patches will dual-purpose an object's permission
bitfield as reference tracking for kernel objects, used to
handle automatic freeing of resources.

We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.

However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.
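
A small sketch of the self-release pattern, assuming the new API is
the k_object_release() call:

    #include <kernel.h>

    void consumer(struct k_sem *sem)
    {
        (void)k_sem_take(sem, K_FOREVER);
        /* ... use the protected resource ... */

        /* a thread may always revoke its own access */
        k_object_release(sem);
    }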

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
a2480bd472 mempool: add API for malloc semantics
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.

Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.
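
A short sketch of the semantics, assuming the API is the
k_mem_pool_malloc() call this commit adds:

    #include <kernel.h>

    K_MEM_POOL_DEFINE(my_pool, 64, 4096, 2, 4);

    void demo(void)
    {
        /* like k_malloc(), but drawn from my_pool */
        void *buf = k_mem_pool_malloc(&my_pool, 256);

        if (buf != NULL) {
            /* ... */
            k_free(buf); /* assumed to work for pool-backed blocks */
        }
    }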

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Adithya Baglody
5133cf56aa kernel: thread: Move out the function _thread_entry() to lib
_thread_entry() is not really part of the kernel but part of
Zephyr's C runtime support library, so move just the function to
lib/thread_entry.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Adithya Baglody
8618716c68 kernel: Cmake: Add __ZEPHYR_SUPERVISOR__ macro for kernel files.
Normally a syscall would check the current privilege level and then
decide to go to _impl_<syscall> directly or go through a
_handler_<syscall>.
__ZEPHYR_SUPERVISOR__ is a macro defined when compiling kernel
files; it makes all the system calls in those files link directly
to _impl_<syscall>, thereby removing the overhead of the privilege
check.
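
An illustrative shape of the resulting dispatch (simplified; the real
generated headers differ in detail):

    #ifdef __ZEPHYR_SUPERVISOR__
    /* kernel files are always privileged: bind straight to the impl */
    #define k_sem_give(sem) _impl_k_sem_give(sem)
    #else
    static inline void k_sem_give(struct k_sem *sem)
    {
        if (_is_user_context()) {
            _arch_syscall_invoke1((u32_t)sem, K_SYSCALL_K_SEM_GIVE);
        } else {
            _impl_k_sem_give(sem);
        }
    }
    #endif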

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Ramakrishna Pallala
110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multithreaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.
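
Typical usage, based on the k_thread_foreach() signature this commit
adds:

    #include <kernel.h>
    #include <misc/printk.h>

    static void dump_cb(const struct k_thread *thread, void *user_data)
    {
        ARG_UNUSED(user_data);
        printk("thread %p prio %d\n", thread, thread->base.prio);
    }

    void dump_threads(void)
    {
        k_thread_foreach(dump_cb, NULL);
    }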

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Andrew Boie
42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.

Fixes: #6814
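
A minimal sketch of the application-side flow, assuming the signature
void z_newlib_get_heap_bounds(void **base, size_t *size):

    void configure_heap_partition(void)
    {
        void *base;
        size_t size;

        z_newlib_get_heap_bounds(&base, &size);
        /* the application can now build a k_mem_partition over
         * [base, base + size) and add it to a user thread's
         * memory domain, spending an MPU region only if needed */
    }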

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
David B. Kinder
3e136b4d23 doc: fix misspellings in doc and Kconfig files
Fix misspellings missed during regular PR reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-05-09 15:06:43 -05:00
Andy Ross
15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.
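
A conceptual sketch of the per-thread recursion count (hypothetical
names; the real kernel code differs):

    /* hypothetical primitives, declared for illustration */
    extern unsigned int arch_irq_lock(void);
    extern void arch_irq_unlock(unsigned int key);
    extern void global_spin_acquire(void);
    extern void global_spin_release(void);

    struct thread {
        /* must be per-thread: a global count deadlocks as soon
         * as a holder swaps away */
        int global_lock_count;
    };

    extern struct thread *current_thread(void);

    unsigned int smp_irq_lock(void)
    {
        unsigned int key = arch_irq_lock(); /* this CPU only */

        if (current_thread()->global_lock_count++ == 0) {
            global_spin_acquire(); /* 0 -> 1: take the SMP lock */
        }
        return key;
    }

    void smp_irq_unlock(unsigned int key)
    {
        if (--current_thread()->global_lock_count == 0) {
            global_spin_release(); /* 1 -> 0: release to other CPUs */
        }
        arch_irq_unlock(key);
    }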

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Andy Ross
eb258706e0 kernel: Move SMP initialization to start of main thread
The smp_init() call was too early.  Device and subsystem
initialization doesn't happen until after the main thread starts
running.  Starting extra CPUs and allowing them to schedule threads
before their drivers are alive is a bad idea, even if it works in a
unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Leandro Pereira
39dc7d03f7 scripts: gen_kobject_list: Generate enums and case statements
Adding a new kernel object type or driver subsystem requires changes
in various different places.  This patch makes it easier to create
those devices by generating as much as possible at compile time.

No behavior change.
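
An illustrative subset of what the script can now emit (hypothetical
fragments, not the actual generated files):

    /* kobj-types-enum.h: one enumerator per object type/subsystem */
    enum k_objects {
        K_OBJ_ANY,
        K_OBJ_MSGQ,
        K_OBJ_SEM,
        /* ... */
        K_OBJ_LAST
    };

    /* otype-to-str.h: case statements #include'd into a switch */
    case K_OBJ_MSGQ: return "k_msgq";
    case K_OBJ_SEM: return "k_sem";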

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Leandro Pereira
c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.

Fixes #6907.
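
A sketch of the pattern with a hypothetical 'mydev' subsystem (the
real checks live in each subsystem's handlers):

    #include <device.h>
    #include <errno.h>

    struct mydev_driver_api {
        int (*write)(struct device *dev, const u8_t *buf, size_t len);
    };

    static inline int mydev_write(struct device *dev, const u8_t *buf,
                                  size_t len)
    {
        const struct mydev_driver_api *api = dev->driver_api;

        if (api->write == NULL) {
            return -ENOTSUP; /* op not implemented by this driver */
        }
        return api->write(dev, buf, len);
    }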

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Andy Ross
e7ded11a2e kernel: Prune ksched.h of dead code
There was a ton of junk in this header.  Pare it down to just the
stuff actually used by code outside of sched.c, move the needed
internal stuff into sched.c itself, and drop everything else.

Note that (other than the tiny inlines that remain here in the header)
the scheduler interface exposed to the rest of the system is now
composed of just 12 functions.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-25 13:13:23 -07:00
Andrew Boie
31bdfc014e userspace: add support for dynamic kernel objects
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.

Currently, k_object_alloc() and k_object_free() are supervisor-only.
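
Basic usage, per the APIs named above:

    #include <kernel.h>

    void demo(void)
    {
        /* allocated from the system heap, tracked in the tree */
        struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

        if (sem != NULL) {
            k_sem_init(sem, 0, 1);
            /* ... */
            k_object_free(sem);
        }
    }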

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-24 12:27:54 -07:00
Leandro Pereira
f5f95ee3a9 kernel: sem: Ensure that initial count is less than or equal to the limit
Ensure this value during static initialization (with build assertions),
and dynamic initializations through system calls.

If the initial count is larger than the limit, it's possible for the
count to wrap around, causing locking issues.

Expanding the BUILD_ASSERT() macros after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.
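
A quick illustration of the static path (the syscall handler performs
the same check for dynamic initialization):

    #include <kernel.h>

    K_SEM_DEFINE(my_sem, 1, 1);    /* OK: initial count <= limit */
    /* K_SEM_DEFINE(bad_sem, 2, 1);   would now fail at build time */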

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:04:36 +05:30
Leandro Pereira
16472cafcf arch: x86: Use retpolines in core assembly routines
In order to mitigate Spectre variant 2 (branch target injection), use
retpolines for indirect jumps and calls.

The newly-added hidden CONFIG_X86_NO_SPECTRE flag, which is disabled
by default, must be set by an x86 SoC if its CPU does not perform
speculative execution.  Most targets supported by Zephyr do not, so
they set it to "y".

A new setting, CONFIG_RETPOLINE, has been added to the "Security
Options" section, and it will be enabled by default if
CONFIG_X86_NO_SPECTRE is disabled.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:00:01 +05:30
Andy Ross
8a4b2e8cf2 kernel, posix: Move ready_one_thread() to scheduler
The POSIX layer had a simple ready_one_thread() utility.  Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
5792ee6da2 kernel/mutex: Clean up k_mutex_unlock()
Recent changes to the scheduler API mean we can simplify this
further: move the assignment to mutex->owner outside the if(), which
removes the need to have an else clause (which just set that field to
NULL when the new_owner was already NULL); and we can likewise move
the irq_unlock() outside the block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
0447a73f6c kernel: include cleanup
Recent changes have eliminated most use of _Swap() in favor of higher
level scheduler abstractions.  We can remove the header too.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
b481d0a045 kernel: Allow pending w/o wait_q for scheduler API cleanup
The mailbox code was written to use the _remove_thread_from_ready_q()
API directly, which would be good to get out of the scheduler internal
API.  What it really wanted to do is to mark a thread "PENDING"
without actually adding it to a wait queue, which is sane enough (the
message stores the "thread to wake up on receipt" handle).

So allow that naturally in the _pend_thread() API by passing a NULL
wait_q.  Really a wait_q needn't be the only way a thread can block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira
541c3cb18b kernel: sched: Fix validation of priority levels
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value, so a check for
both conditions at once can never trigger.  Rewrite the
_VALID_PRIO() macro as a function so that if either of these
invariants is violated, the priority is considered invalid.
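
A simplified sketch of the corrected check (illustrative; the real
function also handles idle and cooperative special cases):

    #include <stdbool.h>
    #include <kernel.h>

    static inline bool is_valid_prio(int prio)
    {
        /* numerically smaller means higher priority in Zephyr */
        return prio >= K_HIGHEST_APPLICATION_THREAD_PRIO &&
               prio <= K_LOWEST_APPLICATION_THREAD_PRIO;
    }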

Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-21 08:39:42 -07:00
Wayne Ren
56c2bc96a6 kernel: add CODE_UNREACHABLE in _StackCheckHandler
* _StackCheckHandler is FUNC_NORETURN
* if _ARCH_EXCEPT is redefined for a specific arch and can return in
  some cases, e.g., from an interrupt or exception, a compiler
  warning will be emitted
* so add CODE_UNREACHABLE to guarantee it will not return (see the
  sketch below)
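
The resulting shape, roughly:

    FUNC_NORETURN void _StackCheckHandler(void)
    {
        _ARCH_EXCEPT(_NANO_ERR_STACK_CHK_FAIL);
        CODE_UNREACHABLE; /* only reached if _ARCH_EXCEPT returns */
    }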

Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
2018-04-17 10:50:12 -07:00
Leandro Pereira
85dcc97db9 kernel: mempool: Always check for overflow in k_calloc()
Assertions should never be used to test for error conditions, such as
checking for overflows.  It should only be used to test for invariants.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-12 14:27:24 -07:00
Leandro Pereira
b902da3599 kernel: mempool: Check for overflow in k_malloc()
If a large size is requested, the expression `size += sizeof(...)`
might overflow, leading to a small block being requested and returned
by k_malloc().

Use a GCC builtin to trap the overflow and return NULL in this case.
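
A sketch of the pattern (hypothetical header size and backend):

    #include <stddef.h>

    extern void *pool_alloc(size_t total); /* hypothetical backend */

    void *checked_malloc(size_t size)
    {
        size_t total;

        /* __builtin_add_overflow() returns true if the sum wrapped */
        if (__builtin_add_overflow(size, 16u, &total)) {
            return NULL; /* size + header would overflow */
        }
        return pool_alloc(total);
    }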

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-12 14:27:24 -07:00
Anas Nashif
c7f5cc9bcb license: fix spdx identifier in a few files
Use correct SPDX identifier for Apache 2.0.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-04-12 15:19:51 -04:00
Kumar Gala
79d151f81d kernel: Fix building of k_thread_create
commit ec7ecf7900 moved some code around
such that the total_size variable is used regardless of how
CONFIG_MPU_REQUIRES_POWER_OF_TWO_ALIGNMENT is set.  So move the
declaration of total_size outside of the ifndef block so things build
properly.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-04-10 22:26:01 -04:00
Andrew Boie
ec7ecf7900 kernel: restore stack size check
The handler for k_thread_create() wasn't verifying that the
provided stack size actually fits in the requested stack object
on systems that enforce power-of-two size/alignment for stacks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-10 10:58:12 -04:00
Anas Nashif
daf7716ddd build: use git version and hash for boot banner
This uses the version and hash (git describe) and replaces the timestamp
currently used in the boot banner. This works much better than using
timestamps. It lets us point to the exact commit being used to run a
certain application or test.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-04-10 10:57:50 -04:00
Josh Triplett
18cb832646 kernel: Disable build timestamps by default for reproducibility
To make Zephyr builds more reproducible, default to disabling build
timestamps. Expand the documentation for CONFIG_BUILD_TIMESTAMP to
explain that enabling it will make the build unreproducible.

Signed-off-by: Josh Triplett <josh@joshtriplett.org>
2018-04-09 18:52:55 -04:00
Leandro Pereira
bf44bacd24 kernel: mutex: Copy assertions to syscall handler
Always ensure that the mutex owner is the current thread and that the
count is sane.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-06 11:52:32 -07:00
Ramakrishna Pallala
f603e603bb lib: posix: Move posix layer from 'kernel' to 'lib'
Move posix layer from 'kernel' to 'lib' folder as it is not
a core kernel feature.

Fixed posix header file dependencies as part of the move and
also removed NEWLIBC related macros from posix headers.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-04-05 16:43:05 -04:00
Andrew Boie
aa6de29c4b lib: user mode compatible mempools
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.

However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.

The sys_mem_pool implementation has the following differences:

* The alloc/free APIs work directly with pointers, no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else.

* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there are
no implications for system responsiveness with this kind of concurrency
control.

* sys_mem_pools do not support the notion of timeouts for requesting
memory.

* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.
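
A minimal sketch of the user-mode-friendly surface described above
(assumes a pool object defined at build time with the new
SYS_MEM_POOL_DEFINE() macro):

    #include <misc/mempool.h>

    extern struct sys_mem_pool app_pool;

    void user_fn(void)
    {
        /* alloc/free work directly on pointers */
        void *p = sys_mem_pool_alloc(&app_pool, 128);

        if (p != NULL) {
            sys_mem_pool_free(p); /* only the pointer is needed */
        }
    }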

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-05 07:03:05 -07:00
Youvedeep Singh
f762fdf482 kernel: posix: move sleep and usleep functions into c file.
Currently the sleep and usleep functions are declared in unistd.h.
unistd.h includes the toolchain-specific unistd.h file, which also
has declarations for these functions. This causes a conflict when
the POSIX-specific unistd.h is included.

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-05 08:15:55 -04:00
Leandro Pereira
1ccd715577 kernel: thread: Consider stack pointer fuzz underflow
When randomizing the stack pointer on thread creation
(CONFIG_STACK_POINTER_RANDOM), the fuzz amount might exceed the stack
size, causing an underflow.

Ensure that this will never underflow by only adjusting the stack size
if there's enough space.
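
A sketch of the guard (illustrative; assumes the Kconfig bound and
sys_rand32_get() as the randomness source):

    size_t adjust_stack_size(size_t stack_size)
    {
        size_t fuzz = sys_rand32_get() % CONFIG_STACK_POINTER_RANDOM;

        if (fuzz < stack_size) {
            stack_size -= fuzz; /* cannot underflow */
        }
        return stack_size;
    }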

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-03 12:32:56 -07:00
Youvedeep Singh
4a8b2d2d2f kernel: POSIX: Compatibility layer for POSIX message queue APIs.
This patch provides POSIX message queue APIs for the POSIX
1003.1 PSE52 standard.
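
Standard POSIX usage of the kind this layer enables:

    #include <mqueue.h>
    #include <fcntl.h>

    void mq_demo(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 32 };
        mqd_t mq = mq_open("/demo", O_CREAT | O_RDWR, 0666, &attr);
        char buf[32];

        mq_send(mq, "hello", 6, 0);
        (void)mq_receive(mq, buf, sizeof(buf), NULL);
        mq_close(mq);
        mq_unlink("/demo");
    }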

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-03 15:30:44 -04:00
Youvedeep Singh
188c1ab5ca kernel: msg_q: Add routine to fetch basic attrs from message queue.
The POSIX layer's message queue implementation needs to fetch the
basic attributes of a message queue. No such routine currently
exists in Zephyr, so add one to the message queue code.
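
A sketch of the added routine's intended use (field names assumed):

    #include <kernel.h>
    #include <misc/printk.h>

    extern struct k_msgq my_msgq;

    void show_attrs(void)
    {
        struct k_msgq_attrs attrs;

        k_msgq_get_attrs(&my_msgq, &attrs);
        printk("msg size %u, capacity %u, used %u\n",
               (u32_t)attrs.msg_size, attrs.max_msgs,
               attrs.used_msgs);
    }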

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-03 15:30:44 -04:00
Youvedeep Singh
2341bf93de kernel: posix: reorganize posix internal function.
The calculate_timeout function calculates a timeout in msecs from a
timespec. It is used in multiple places inside the posix code, so
move it into the pthread_common.c file.
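
An illustrative version of the shared helper's logic:

    #include <kernel.h>
    #include <time.h>

    /* milliseconds remaining until 'abstime' */
    static s32_t timeout_ms_from(const struct timespec *abstime)
    {
        s64_t abs_ms = (s64_t)abstime->tv_sec * MSEC_PER_SEC +
                       abstime->tv_nsec / 1000000; /* ns -> ms */
        s64_t now_ms = k_uptime_get();

        return (abs_ms > now_ms) ? (s32_t)(abs_ms - now_ms) : 0;
    }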

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-03 15:30:44 -04:00
Kristian Klomsten Skordal
c39e2a2d6c kernel: Fix left shift into sign bit
The result of left shifting a bit into the sign-bit is undefined
behavior. This makes the offending shift operation unsigned.
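
The offending pattern and its fix, in miniature:

    #include <zephyr/types.h>

    u32_t bad(void)
    {
        return 1 << 31;  /* UB: shifts into the sign bit of an int */
    }

    u32_t good(void)
    {
        return 1U << 31; /* well-defined: the operand is unsigned */
    }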

Signed-off-by: Kristian Klomsten Skordal <kristian.skordal@nordicsemi.no>
2018-03-22 19:16:17 -04:00
Juan Manuel Torres Palma
342da7ac72 posix: semaphore: fix bugs and simplify code
Modifies several functions that are causing wrong
behaviour.

 * semaphore.h: add missing restrict keyword.
 * sem_destroy(): check that nobody is waiting
   before destroying the object.
 * sem_timedwait(): simplify function logic and
   fix a bug when abstime > currtime, where ticks
   were passed instead of ms to k_sem_take().
 * sem_wait(): avoid unnecessary checks.
 * sem_init(): add pshared value assertion.

Signed-off-by: Juan Manuel Torres Palma <j.m.torrespalma@gmail.com>
2018-03-21 14:27:47 -07:00
Anas Nashif
8470b4d365 kernel: kconfig: reorg kernel Kconfig a bit
Move INIT_STACK to debug options and give POSIX layer its own menu.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-19 15:37:26 -04:00
Anas Nashif
ee9bebf7d0 kernel: smp: group SMP options in Kconfig file
Move SMP options together and align help text.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-19 15:37:26 -04:00
Anas Nashif
bb64ec2921 lib: move ring_buffer Kconfig to lib/, cleanup lib/Kconfig
* ring_buffer is in lib, so move the Kconfig out of the kernel.
* move one Kconfig used for json to lib/Kconfig alongside other
  Kconfigs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-19 15:37:26 -04:00
Andy Ross
81242985c2 kernel/sched: Clean up docs for _pend_thread(), limit scope
The scheduler has a kernel-internal _pend_thread() utility which
sounds like a function which will add an arbitrary thread to a wait_q.
This is essentially unsupportable in SMP, where that thread might
actually be executing on a different CPU.

Thankfully we never used it like that.  The only spots outside the
scheduler that use the API are in pipes and mailbox, which both just
want to pend a DUMMY thread to track the timeout but will never try to
pend a true foreign thread.

Clarify the comment and add an assertion to make sure this promise
isn't broken in the future.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross
345553b19b kernel/queue: Clean up scheduler API usage
This was the only spot where the scheduler-internal
_peek_first_pending_thread() API was used.  Given that this kind of
thing is inherently racy (it may not be pending as long as you expect
if a timeout expires, etc...), it would be nice to retire it.

And as it happens all the queue code was using it for was to detect
the case of a non-empty wait_q over which it was looping, which is
trivial to do without API support.
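
A sketch of the replacement pattern (internal scheduler calls shown
for illustration only):

    while (!sys_dlist_is_empty(&queue->wait_q)) {
        struct k_thread *pending =
            _unpend_first_thread(&queue->wait_q);

        _ready_thread(pending); /* hand each waiter the next item */
    }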

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00