Commit graph

1484 commits

Anas Nashif
80e6a978a6 kernel/drivers: fix compile warnings
Uncovered by clang: we have some functions that are only used
conditionally, so guard them to make them available only when those
conditions are met.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-01 22:58:23 +02:00
Ulf Magnusson
7727d1a48e kernel: Kconfig: Remove redundant 'default n' properties
Bool symbols implicitly default to 'n'.

A 'default n' can make sense e.g. in a Kconfig.defconfig file, if you
want to override a 'default y' on the base definition of the symbol. It
isn't used like that on any of these symbols though.

Also simplify the definitions of COOP_ENABLED, PREEMPT_ENABLED, and
SYS_CLOCK_EXISTS. 'default' (and def_bool) can take any expression, not
just a fixed value.

(It would work without the parentheses around the comparisons too.)

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2018-06-22 15:21:14 -04:00
Andy Ross
3d14615f56 kernel: Restore CONFIG_MULTITHREADING=n behavior
The prepare_multithreading()/switch_to_main_thread() steps were being
done unconditionally, when with multithreading disabled we want to jump
straight into the main thread on the existing stack.

Needless to say, that doesn't work well.  Fixes #8361.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-13 17:23:05 -04:00
Andy Ross
55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll.  The disadvantage is the 4
bytes of thread space needed.  Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so there is no need for one of the irq lock/unlock
  pairs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
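
A rough sketch of the shape this takes; the struct layout and helper
name below are illustrative, not the actual Zephyr internals:

    #include <kernel.h>

    /* Illustrative sketch: the flag is a full word inside the on-stack
     * poller, not a bit in thread_state. */
    struct poller_sketch {
        struct k_thread *thread;
        volatile int is_polling;
    };

    static void poller_cancel(struct poller_sketch *p)
    {
        /* A word store is atomic, so no irq_lock()/irq_unlock() pair
         * is needed just to clear the flag. */
        p->is_polling = 0;
    }
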
Andy Ross
b173e4353f kernel/queue: Fix spurious NULL exit condition when using timeouts
The queue loop when CONFIG_POLL is in use has an inherent race
between the return of k_poll() and the inspection of the list where no
lock can be held.  Other contending readers of the same queue can
sneak in and steal the item out of the list before the current thread
gets to the sys_sflist_get() call, and the current loop will (if it
has a timeout) spuriously return NULL before the timeout expires.

It's not even a hard race to exercise.  Consider three threads at
different priorities: High (which can be an ISR too), Mid, and Low:

1. Mid and Low both enter k_queue_get() and sleep inside k_poll() on
   an empty queue.

2. High comes along and calls k_queue_insert().  The queue code then
   wakes up Mid, and reschedules, but because High is still running Mid
   doesn't get to run yet.

3. High inserts a SECOND item.  The queue then unpends the next thread
   in the list (Low), and readies it to run.  But as before, it won't
   be scheduled yet.

4. Now High sleeps (or if it's an interrupt, exits), and Mid gets to
   run.  It dequeues and returns the item it was delivered, as normal.

5. But Mid is still running!  So it re-enters the loop it's sitting in
   and calls k_queue_get() again, which sees and returns the second
   item in the queue synchronously.  Then it calls it a third time and
   goes to sleep because the queue is empty.

6. Finally, Low wakes up to find an empty queue, and returns NULL
   despite the fact that the timeout hadn't expired.

The fix is simple enough: check the timeout expiration inside the loop
so we don't return early.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:11:51 -04:00
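
The shape of the fix is the familiar "recompute the remaining time
inside the retry loop" pattern.  A simplified sketch, not the actual
k_queue_get() internals (K_FOREVER handling omitted):

    #include <kernel.h>

    void *queue_get_sketch(struct k_queue *queue, s32_t timeout_ms)
    {
        s64_t end = k_uptime_get() + timeout_ms;

        for (;;) {
            void *item = k_queue_get(queue, K_NO_WAIT);

            if (item != NULL) {
                return item; /* got one, even after losing a race */
            }

            if (k_uptime_get() >= end) {
                return NULL; /* only now has the timeout really expired */
            }

            /* stand-in for the internal k_poll() wait on the queue */
            k_sleep(1);
        }
    }
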
Paul Sokolovsky
fd55935560 kernel: work_q: Document implications of default sys work_q priority
Default value of CONFIG_SYSTEM_WORKQUEUE_PRIORITY is -1, which means
it's run by a cooperative thread. Explicitly mention (in the Kconfig
help) that this means any work handler submitted to this default
queue won't be preempted by some other thread (which is generally
good, but worth documenting explicitly).

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-06-11 14:40:07 -04:00
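
In practice this means a handler submitted to the system workqueue runs
at a cooperative priority and won't be preempted by other threads, so it
should stay short.  A minimal example using the standard k_work API:

    #include <zephyr.h>

    static struct k_work blink_work;

    static void blink_handler(struct k_work *work)
    {
        /* Runs in the system workqueue thread.  With the default
         * CONFIG_SYSTEM_WORKQUEUE_PRIORITY of -1 (cooperative), no
         * other thread preempts this handler, so keep it brief. */
    }

    void blink_init(void)
    {
        k_work_init(&blink_work, blink_handler);
    }

    void blink_trigger(void)
    {
        /* safe from ISRs too: defer the heavy lifting to the workqueue */
        k_work_submit(&blink_work);
    }
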
Andrew Boie
2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Michael Scott
6c95dafd82 kernel: sched: use _is_thread_ready() in should_preempt()
We are using _is_thread_prevented_from_running() to see if the
_current thread can be preempted in should_preempt().  The idea
being that even if the _current thread is a high priority coop
thread, we can still preempt it when it's pending, suspended,
etc.

This does not take into account if the thread is sleeping.

k_sleep() merely removes the thread from the ready_q and calls
Swap().  The scheduler will swap away from the thread temporarily
and then on the next cycle get stuck on the sleeping thread for
however long the sleep timeout is, doing exactly nothing because
other functions like _ready_thread() use _is_thread_ready() as a
check before proceeding.

We should use !_is_thread_ready() to take into account when threads
are waiting on a timer, and let other threads run in the meantime.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-06-04 08:21:47 -04:00
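
Roughly, the predicate becomes the following (an illustrative sketch of
the scheduler-internal check, not the literal diff):

    static int should_preempt_sketch(void)
    {
        /* _current may be preempted whenever it is not ready to run:
         * pended, suspended and -- the case fixed here -- sleeping on
         * a timeout all count. */
        if (!_is_thread_ready(_current)) {
            return 1;
        }

        /* ... the remaining cooperative/priority checks are unchanged ... */
        return 0;
    }
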
Michael Scott
f669a08eea kernel: thread: fix _THREAD_DUMMY check in _check_stack_sentinel()
All other checks of thread_state use a bitwise & operator in case
there are other flags set in thread_state.  Let's fix the only
outlier, in _check_stack_sentinel(), to do the same.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-06-01 09:03:48 -04:00
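
The change amounts to switching an equality test to a bit test; a
before/after sketch (the surrounding function is _check_stack_sentinel()):

    /* Before: an equality test fails as soon as any other flag happens
     * to be set in thread_state alongside _THREAD_DUMMY. */
    if (_current->base.thread_state == _THREAD_DUMMY) {
        return;
    }

    /* After: a bitwise test, matching every other thread_state check. */
    if (_current->base.thread_state & _THREAD_DUMMY) {
        return;
    }
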
Andy Ross
43553da9b2 kernel/sched: Fix preemption logic
The should_preempt() code was catching some of the "unrunnable" cases
but not all of them, opening the possibility of failing to preempt a
just-pended thread and thus waking it up synchronously.  There are
reports of this causing spin loops over k_poll() in the network stack
work queues (see #8049).

Note that the previous _is_dummy() call is folded into (the somewhat
verbosely named) _is_thread_prevented_from_running(), and that the
order of tests has been changed/optimized to hopefully catch common
cases earlier.

Suggested-by: Michael Scott <michael@opensourcefoundries.com>
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 16:46:14 -04:00
Andy Ross
eace1df539 kernel/sched: Fix SMP scheduling
Recent changes post-scheduler-rewrite broke scheduling on SMP:

The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode.  Fix this by adding a "swap_ok" field to the CPU
record (not the thread) which is set at the same time out of
update_cache().

The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).

Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.

There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current.  That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue.  Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already.  Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 14:02:03 -04:00
Andy Ross
75398d2c38 kernel/mempool: Handle transient failure condition
The sys_mem_pool implementation has a subtle error case where it
detects a simultaneous allocation after having released the lock, in
which case exactly one of the racing allocators will return with
-EAGAIN (the other one succeeds, of course).

I documented this condition at the lower level, but forgot to actually
handle it at the k_mem_pool level where we want to retry once before
going to sleep, as it doesn't generally represent an empty heap.  It
got caught by code auditing in:

https://github.com/zephyrproject-rtos/zephyr/issues/6757

(Full disclosure: I tested this by whiteboxing the first failure.  I
wasn't able to put together a rig to reliably exercise the actual
race.)

This patch also fixes a noop thinko in the return logic in the same
function, which contained:

   (ret == -EAGAIN) || (ret && ret != -ENOMEM)

The first term is needless and implied by the second.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-27 09:55:04 -04:00
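
In outline, the k_mem_pool-level handling now looks something like this
(simplified sketch; pool_alloc_once() is a hypothetical stand-in for the
lower-level sys_mem_pool allocation path):

    static int pool_block_get_sketch(struct k_mem_pool *pool,
                                     struct k_mem_block *block, size_t size)
    {
        int ret = pool_alloc_once(pool, block, size); /* hypothetical helper */

        if (ret == -EAGAIN) {
            /* lost a race with another allocator, not out of memory */
            ret = pool_alloc_once(pool, block, size);
        }

        if (ret != 0 && ret != -ENOMEM) {
            return ret; /* hard failure */
        }

        /* ... on -ENOMEM, fall through to the blocking/wait path ... */
        return ret;
    }
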
Andy Ross
3a0cb2d35d kernel: Remove legacy preemption checking
The metairq feature exposed the fact that all of our arch code (and a
few mistaken spots in the scheduler too) was trying to interpret
"preemptible" threads independently.

As of the scheduler rewrite, that logic is entirely within sched.c and
doing it externally is redundant.  And now that "cooperative" threads
can be preempted, it's wrong and produces test failures when used with
metairq threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-25 09:40:55 -07:00
Carles Cufi
b54644913d kernel: Use ISR-specific entropy function when available
During the early boot process, in prepare_multithreading(), the kernel
structures and scheduler are not ready yet. In order to obtain entropy
for early work such as stack randomization, optionally use the
ISR-specific function that some drivers provide, when present.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2018-05-24 15:13:13 -07:00
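
A minimal sketch of the early-boot idea, assuming the
entropy_get_entropy_isr() call, ENTROPY_BUSYWAIT flag and
CONFIG_ENTROPY_NAME binding from the entropy driver API of this era:

    #include <device.h>
    #include <entropy.h>

    static void early_random_bytes(u8_t *buf, u16_t len)
    {
        struct device *dev = device_get_binding(CONFIG_ENTROPY_NAME);

        if (dev == NULL) {
            return; /* no entropy driver configured */
        }

        /* Busy-wait instead of sleeping: the scheduler and kernel
         * synchronization primitives are not usable this early. */
        (void)entropy_get_entropy_isr(dev, buf, len, ENTROPY_BUSYWAIT);
    }
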
Leandro Pereira
fb0fba91a5 arch: x86: Rename CPU_NO_SPECTRE to CPU_NO_SPECTRE_V2
There's a new known variant, so make it clear what this one is for.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-24 13:07:12 -04:00
Andrew Boie
538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Kumar Gala
177bbbd35f kernel: Fix trivial typo in CONFIG_WAIT_Q_FAST
The Kconfig option is CONFIG_WAITQ_FAST not CONFIG_WAIT_Q_FAST.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-23 17:57:06 -04:00
Leandro Pereira
389c36439a kernel: init: Use entropy API directly to initialize stack canary
Some sys_rand32_get() implementations will use shared state and protect
that state using some synchronization primitive such as a mutex or a
semaphore.  It's too early in the boot process to use any of them,
which causes some issues.

Use the entropy API directly to set up the stack canaries.

This doesn't completely solve the problem, as some drivers will use the
same synchronization primitives anyway.  Some drivers (e.g.  the NRF5
entropy driver) provide an API to be used by ISRs that might be
suitable here, but not all drivers do that.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-23 14:42:49 -07:00
Andrew Boie
982d5c8f55 init: run kernel_arch_init() earlier
This was in prepare_multithreading(), which was moved
to after driver initialization and not before it.
The function now really just prepares system threads.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 17:22:19 -04:00
Andrew Boie
4afc6c9ff2 kernel: remove STACK_ALIGN checks
STACK_ALIGN has somewhat different semantics across our arches,
particularly ARC.

These checks are unnecessary, _new_thread() is required
to properly align stack sizes anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 15:05:15 -05:00
Andy Ross
4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
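
Typical use is one call at the start of each job, with the competing
threads at equal static priority.  A minimal sketch, assuming the
k_thread_deadline_set() API and CONFIG_SCHED_DEADLINE this commit
introduces (the tick value is arbitrary):

    #include <zephyr.h>

    void start_of_job(void)
    {
        /* Deadline is a delta from "now" in ticks; among threads of
         * equal priority the earliest deadline runs first, compared
         * with wraparound-safe signed arithmetic. */
        k_thread_deadline_set(k_current_get(), 100);

        /* ... do the work ... */
    }
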
Andy Ross
7aa25fa5eb kernel: Add "meta IRQ" thread priorities
This patch adds a set of priorities at the (numerically) lowest end of
the range which have "meta-irq" behavior.  Runnable threads at these
priorities will always be scheduled before threads at lower
priorities, EVEN IF those threads are otherwise cooperative and/or
have taken a scheduler lock.

Making such a thread runnable in any way thus has the effect of
"interrupting" the current task and running the meta-irq thread
synchronously, like an exception or system call.  The intent is to use
these priorities to implement "interrupt bottom half" or "tasklet"
behavior, allowing driver subsystems to return from interrupt context
but be guaranteed that user code will not be executed (on the current
CPU) until the remaining work is finished.

As this breaks the "promise" of non-preemptibility granted by the
current API for cooperative threads, this tool probably shouldn't be
used from application code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
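
A sketch of the intended "bottom half" pattern (thread geometry and
names are illustrative; with CONFIG_NUM_METAIRQ_PRIORITIES nonzero, the
numerically lowest cooperative priorities get this behavior):

    #include <zephyr.h>

    K_SEM_DEFINE(bh_sem, 0, 1);
    K_THREAD_STACK_DEFINE(bh_stack, 1024);
    static struct k_thread bh_thread;

    static void bottom_half(void *p1, void *p2, void *p3)
    {
        for (;;) {
            k_sem_take(&bh_sem, K_FOREVER);
            /* Finish what the ISR deferred.  User code on this CPU
             * will not run until we pend again. */
        }
    }

    void driver_init(void)
    {
        /* -CONFIG_NUM_COOP_PRIORITIES is the numerically lowest
         * priority; with meta-irq priorities configured it preempts
         * even cooperative threads. */
        k_thread_create(&bh_thread, bh_stack,
                        K_THREAD_STACK_SIZEOF(bh_stack), bottom_half,
                        NULL, NULL, NULL, -CONFIG_NUM_COOP_PRIORITIES,
                        0, K_NO_WAIT);
    }

    void driver_isr(void *arg)
    {
        k_sem_give(&bh_sem); /* makes the meta-irq thread runnable */
    }
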
Andy Ross
1856e2206d kernel/sched: Don't preempt cooperative threads
The scheduler rewrite added a regression in uniprocessor mode where
cooperative threads would be unexpectedly preempted, because nothing
was checking the preemption status of _current at the point where the
next-thread cache pointer was being updated.

Note that update_cache() needs a little more context: spots like
k_yield() that leave _current runnable need to be able to tell it that
"yes, preemption is OK here even though the thread is cooperative".
So it has a "preempt_ok" argument now.

Interestingly this didn't get caught because we don't test that.  We
have lots and lots of tests of the converse cases (i.e. making sure
that threads get preempted when we expect them to), but nothing that
explicitly tries to jump in front of a cooperative thread.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andrew Boie
72c7ded561 kernel: prepare threads after PRE_KERNEL*
prepare_multithreading() was done very early as it had a call
to initialize the interrupt subsystem. This was causing problems
with stack pointer randomization as any HW-based entropy drivers
had not been initialized.

Move the call to initialize the interrupt system out of
prepare_multithreading(), which now really does just prepare
the system to start threads. This is now done after the PRE_KERNEL
phases.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-22 15:59:07 -07:00
Ulf Magnusson
aa26289458 kconfig: Get rid of leading/trailing whitespace in prompts
Leading/trailing whitespace in prompts requires ugly workarounds in
genrest.py, as e.g. *prompt * is invalid RST. strip() all prompts in
Kconfiglib and get rid of the genrest.py workarounds. Add a warning too.

The Kconfiglib update has some unrelated cleanups and fixes (that won't
affect Zephyr).

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2018-05-19 09:26:39 +03:00
Andy Ross
3ce9c84ba8 kernel: Wait queues aren't dlists anymore
These assertions snuck through in crossed pull requests.  There's a
specific API for _wait_q_t now, you can't hit the list directly
(because it might be a tree).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross
1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessor selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross
c0ba11b281 kernel: Don't _arch_switch() to yourself
The SMP testing missed the case where _Swap() decides to return back
into the _current.  Obviously there is no valid switch handle for the
running thread into which we can restore, and everything blows up.
(What happened is that the new scheduler code opened up a spot where
k_thread_priority_set() does a _reschedule() unconditionally and
doesn't check to see whether or not it's needed, like the old code did.)

But that isn't incorrect!  It's entirely possible that _Swap() may
find that no thread is runnable except _current (due, for example, to
another CPU racing the other thread you expected off to sleep or
something).  Don't blow up; check for that case and return as a no-op.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Krzysztof Chruscinski
9666c30d5f kernel: mem_slab: Reschedule in k_mem_slab_free only when necessary.
Rescheduling was called unconditionally at the end of every
k_mem_slab_free() call.  It is necessary only when a thread is pending
in the wait queue.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2018-05-18 20:16:50 +03:00
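
In outline (a simplified sketch using scheduler-internal names that may
not match the patch exactly; put_block_back() stands in for the existing
free-list code):

    static void mem_slab_free_sketch(struct k_mem_slab *slab, void **mem)
    {
        unsigned int key = irq_lock();
        struct k_thread *waiter = _unpend_first_thread(&slab->wait_q);

        if (waiter != NULL) {
            /* Hand the block straight to the waiter; this is the only
             * case where rescheduling can change anything. */
            _set_thread_return_value_with_data(waiter, 0, *mem);
            _ready_thread(waiter);
            _reschedule(key);
        } else {
            /* No one is pending: return the block and just unlock. */
            put_block_back(slab, mem); /* stand-in for free-list code */
            irq_unlock(key);
        }
    }
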
Andy Ross
ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
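
The resulting abstraction is tiny; roughly (per the commit, with details
that may not match the tree exactly):

    /* _wait_q_t is an opaque struct now, not a typedef for sys_dlist_t. */
    typedef struct {
        sys_dlist_t waitq;
    } _wait_q_t;

    #define _WAIT_Q_INIT(wq) { SYS_DLIST_STATIC_INIT(&(wq)->waitq) }

    /* Callers go through small pend/unpend/peek style wrappers instead
     * of touching the dlist directly, so the underlying container can
     * later become a tree. */
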
Andy Ross
4ca0e07088 kernel: Add _unpend_all convenience wrapper to scheduler API
Refactoring.  Mempool wants to unpend all threads at once.  It's
cleaner to do this in the scheduler instead of the IPC code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie
3772f77119 k_poll: expose to user mode
k_poll is now accessible from user mode. A memory allocation takes place
from the caller's resource pool to copy the provided poll_events
array; this can be large enough to make allocating it on the stack
not preferable.

k_poll_signal is now a proper kernel object. Two APIs have been added,
one to reset the signaled state and one to check the current signaled
state and result value.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
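
A minimal user-mode usage sketch; the signal check/reset helper names
below are assumed to match the APIs this commit adds:

    #include <zephyr.h>

    static struct k_poll_signal sig;

    void consumer_thread(void)
    {
        struct k_poll_event events[1];
        unsigned int signaled;
        int result;

        k_poll_signal_init(&sig);
        k_poll_event_init(&events[0], K_POLL_TYPE_SIGNAL,
                          K_POLL_MODE_NOTIFY_ONLY, &sig);

        /* From user mode the events array is copied into a kernel-side
         * allocation drawn from this thread's resource pool. */
        k_poll(events, 1, K_FOREVER);

        /* check-and-reset helpers (names assumed) */
        k_poll_signal_check(&sig, &signaled, &result);
        k_poll_signal_reset(&sig);
    }
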
Andrew Boie
8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value; if nonzero is returned,
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed: they
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
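
A handler under the new scheme looks roughly like this (K_OOPS() and the
Z_ prefix come from the commit; the specific check macro and handler
shown here are illustrative):

    /* Illustrative: the check returns nonzero on failure instead of
     * oopsing internally; K_OOPS() restores the old kill-the-caller
     * policy where that is still what we want. */
    u32_t _handler_k_sem_give(u32_t sem, u32_t a2, u32_t a3, u32_t a4,
                              u32_t a5, u32_t a6, void *ssf)
    {
        K_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));
        _impl_k_sem_give((struct k_sem *)sem);
        return 0;
    }
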
Andrew Boie
2b9b4b2cf7 k_queue: allow user mode access via allocators
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.

FIFO/LIFOs are derived from k_queues and have had allocator functions
added.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie
47fa8eb98c userspace: generate list of kernel object sizes
This used to be done by hand but can easily be generated like
we do other switch statements based on object type.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
f3bee951b1 kernel: stacks: add k_stack_alloc() init
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.

Fixes #7285

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
0fe789ff2e kernel: add k_msgq_alloc_init()
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
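
Usage from the calling thread, with the ring buffer coming out of that
thread's resource pool instead of being supplied by the caller (message
type and queue depth here are arbitrary):

    #include <zephyr.h>

    struct sensor_msg {
        u32_t reading;
    };

    static struct k_msgq sensor_q;

    void producer_init(void)
    {
        /* buffer for 16 messages is allocated from this thread's
         * resource pool; fails if no pool is assigned or it is full */
        int ret = k_msgq_alloc_init(&sensor_q, sizeof(struct sensor_msg), 16);

        if (ret != 0) {
            /* handle -ENOMEM */
        }
    }
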
Andrew Boie
44fe81228d kernel: pipes: add k_pipe_alloc_init()
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.

K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
97bf001f11 userspace: get dynamic objs from thread rsrc pools
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
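
A sketch of wiring a pool to a thread (the pool geometry is arbitrary;
the assign call is the API this series introduces):

    #include <zephyr.h>

    /* blocks from 64 to 1024 bytes, 4 max-size blocks, 4-byte alignment */
    K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

    void setup_worker(k_tid_t worker)
    {
        /* allocations made on behalf of 'worker' (e.g. k_msgq_alloc_init(),
         * user-mode k_poll(), dynamic kernel objects) now draw from
         * app_pool instead of the kernel heap */
        k_thread_resource_pool_assign(worker, &app_pool);
    }
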
Andrew Boie
337e74334c userspace: automatic resource release framework
An object's set of permissions is now also used as a form
of reference counting. If an object's permission bitmap gets
completely cleared, it is now possible to specify object type
specific cleanup functions to be implicitly called.

Currently no objects are enabled yet. Forthcoming patches
will do this on a per object basis.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
e9cfc54d00 kernel: remove k_object_access_revoke() as syscall
Forthcoming patches will dual-purpose an object's permission
bitfield as also reference tracking for kernel objects, used to
handle automatic freeing of resources.

We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.

However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
a2480bd472 mempool: add API for malloc semantics
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.

Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
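
Minimal usage (the pool definition is arbitrary):

    #include <zephyr.h>

    K_MEM_POOL_DEFINE(net_pool, 32, 2048, 8, 4);

    void handle_packet(size_t len)
    {
        /* like k_malloc(), but drawing from net_pool rather than the
         * kernel heap; k_malloc() itself is now built on this call */
        void *buf = k_mem_pool_malloc(&net_pool, len);

        if (buf == NULL) {
            return; /* pool exhausted */
        }
        /* ... use the buffer ... */
        k_free(buf); /* freed back to the owning pool */
    }
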
Adithya Baglody
5133cf56aa kernel: thread: Move out the function _thread_entry() to lib
_thread_entry() is not really a part of the kernel but a part of
Zephyr's C runtime support library.  Hence, move just the
function to lib/thread_entry.c.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Adithya Baglody
8618716c68 kernel: Cmake: Add __ZEPHYR_SUPERVISOR__ macro for kernel files.
Normally a syscall would check the current privilege level and then
decide to go to _impl_<syscall> directly or go through a
_handler_<syscall>.
__ZEPHYR_SUPERVISOR__ is a compiler optimization flag which makes
all the system calls from the kernel files link directly
to _impl_<syscall>, thereby reducing the overhead of checking
privileges.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
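
Conceptually, the generated stub for each syscall behaves like this
(heavily simplified sketch, not the real generated code; k_foo and
K_SYSCALL_K_FOO are placeholder names):

    static inline int k_foo(int arg)
    {
    #if defined(CONFIG_USERSPACE) && !defined(__ZEPHYR_SUPERVISOR__)
        /* might be called from user mode: check and trap if needed */
        if (_arch_is_user_context()) {
            return _arch_syscall_invoke1(arg, K_SYSCALL_K_FOO);
        }
    #endif
        /* kernel files are built with __ZEPHYR_SUPERVISOR__, so they
         * always take this direct path with zero checking overhead */
        return _impl_k_foo(arg);
    }
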
Ramakrishna Pallala
110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multithreaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
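
Minimal usage: a callback invoked once per thread in the system.

    #include <zephyr.h>
    #include <misc/printk.h>

    static void dump_thread(const struct k_thread *thread, void *user_data)
    {
        printk("thread %p prio %d\n", thread, thread->base.prio);
    }

    void dump_all_threads(void)
    {
        k_thread_foreach(dump_thread, NULL);
    }
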
Andrew Boie
42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.

Fixes: #6814

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
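
Retrieving the bounds for use in a memory domain looks roughly like this
(the z_newlib_get_heap_bounds() signature is assumed from the commit
description):

    #include <zephyr.h>

    extern void z_newlib_get_heap_bounds(void **base, size_t *size);

    void newlib_heap_region(void)
    {
        void *base;
        size_t size;

        /* On MPU targets with power-of-two restrictions the buffer is
         * already aligned/sized appropriately; nothing is programmed
         * into the MPU automatically. */
        z_newlib_get_heap_bounds(&base, &size);

        /* The application decides whether to spend an MPU region on it,
         * e.g. by adding [base, base + size) to a user thread's memory
         * domain if user code needs the malloc heap. */
    }
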
David B. Kinder
3e136b4d23 doc: fix misspellings in doc and Kconfig files
Fix misspellings missed during regular PR reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-05-09 15:06:43 -05:00
Andy Ross
15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Andy Ross
eb258706e0 kernel: Move SMP initialization to start of main thread
The smp_init() call was too early.  Device and subsystem
initialization doesn't happen until after the main thread starts
running.  Starting extra CPUs and allowing them to schedule threads
before their drivers are alive is a bad idea, even if it works in a
unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00