Commit graph

353 commits

Author SHA1 Message Date
Andy Ross 225c74bbdf kernel/Kconfig: Reorganize wait_q and sched algorithm choices
Make these "choice" items instead of a single boolean that implies the
element unset.

Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (its constant-factor overhead is
bigger than a list's!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Anas Nashif 80e6a978a6 kernel/drivers: fix compile warnings
Uncovered by clang: we have some functions that are only used
conditionally, so guard them to make them available only when those
conditions are met.
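
For illustration, the guard pattern looks roughly like this (option
and function names here are hypothetical):

    /* Only build the helper when the option that uses it is enabled,
     * so clang does not warn about an unused static function.
     */
    #ifdef CONFIG_FOO_TRACING
    static void foo_trace(struct k_thread *thread)
    {
        /* ... */
    }
    #endif /* CONFIG_FOO_TRACING */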

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-01 22:58:23 +02:00
Michael Scott 6c95dafd82 kernel: sched: use _is_thread_ready() in should_preempt()
We are using _is_thread_prevented_from_running() in should_preempt()
to see if the _current thread can be preempted.  The idea is that
even if the _current thread is a high-priority coop thread, we can
still preempt it when it's pending, suspended, etc.

This does not take into account whether the thread is sleeping.

k_sleep() merely removes the thread from the ready_q and calls
Swap().  The scheduler will swap away from the thread temporarily
and then on the next cycle get stuck on the sleeping thread for
however long the sleep timeout is, doing exactly nothing because
other functions like _ready_thread() use _is_thread_ready() as a
check before proceeding.

We should use !_is_thread_ready() to take into account when threads
are waiting on a timer, and let other threads run in the meantime.
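
A minimal sketch of the resulting check inside should_preempt()
(fragment only; the surrounding logic is elided):

    /* A pended, suspended, or *sleeping* _current thread must not
     * block preemption; !_is_thread_ready() covers all of these,
     * including a thread waiting out a k_sleep() timeout.
     */
    if (!_is_thread_ready(_current)) {
        return 1; /* preemption OK */
    }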

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-06-04 08:21:47 -04:00
Andy Ross 43553da9b2 kernel/sched: Fix preemption logic
The should_preempt() code was catching some of the "unrunnable" cases
but not all of them, opening the possibility of failing to preempt a
just-pended thread and thus waking it up synchronously.  There are
reports of this causing spin loops over k_poll() in the network stack
work queues (see #8049).

Note that the previous _is_dummy() call is folded into (the somewhat
verbosely named) _is_thread_prevented_from_running(), and that the
order of tests has been changed/optimized to hopefully catch common
cases earlier.

Suggested-by: Michael Scott <michael@opensourcefoundries.com>
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 16:46:14 -04:00
Andy Ross eace1df539 kernel/sched: Fix SMP scheduling
Recent changes post-scheduler-rewrite broke scheduling on SMP:

The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode.  Fix this by adding a "swap_ok" field to the CPU
record (not the thread), which is set at the same time from within
update_cache().

The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).

Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.

There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current.  That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue.  Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already.  Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).
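
The per-CPU flag amounts to something like this sketch (the exact
field layout is an assumption):

    struct _cpu {
        /* ... */
        struct k_thread *current;
        /* set out of update_cache(): this CPU may swap away from a
         * cooperative _current at the next opportunity
         */
        u8_t swap_ok;
    };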

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 14:02:03 -04:00
Andy Ross 3a0cb2d35d kernel: Remove legacy preemption checking
The metairq feature exposed the fact that all of our arch code (and a
few mistaken spots in the scheduler too) was trying to interpret
"preemptible" threads independently.

As of the scheduler rewrite, that logic is entirely within sched.c and
doing it externally is redundant.  And now that "cooperative" threads
can be preempted, it's wrong and produces test failures when used with
metairq threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-25 09:40:55 -07:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
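
As a sketch, the tie-break on equal priority could read as follows
(field and function names are assumptions, not the literal patch):

    /* Deadlines are stored as deltas from "now"; the subtraction
     * compared as signed handles counter wraparound.
     */
    static int thread_runs_before(struct k_thread *a, struct k_thread *b)
    {
        if (a->base.prio != b->base.prio) {
            /* numerically lower value = higher priority */
            return a->base.prio < b->base.prio;
        }
        /* equal priority: earlier deadline wins */
        return (s32_t)(a->base.prio_deadline - b->base.prio_deadline) < 0;
    }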

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross 7aa25fa5eb kernel: Add "meta IRQ" thread priorities
This patch adds a set of priorities at the (numerically) lowest end of
the range which have "meta-irq" behavior.  Runnable threads at these
priorities will always be scheduled before threads at lower
priorities, EVEN IF those threads are otherwise cooperative and/or
have taken a scheduler lock.

Making such a thread runnable in any way thus has the effect of
"interrupting" the current task and running the meta-irq thread
synchronously, like an exception or system call.  The intent is to use
these priorities to implement "interrupt bottom half" or "tasklet"
behavior, allowing driver subsystems to return from interrupt context
but be guaranteed that user code will not be executed (on the current
CPU) until the remaining work is finished.

As this breaks the "promise" of non-preemptibility granted by the
current API for cooperative threads, this tool probably shouldn't be
used from application code.
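
Expected usage looks something like this sketch (it assumes one
meta-irq priority is configured, making K_HIGHEST_THREAD_PRIO the
meta-irq level):

    K_SEM_DEFINE(bh_sem, 0, 1);

    static void bottom_half_fn(void *a, void *b, void *c)
    {
        while (1) {
            k_sem_take(&bh_sem, K_FOREVER);
            /* finish the interrupt work; user code stays locked out
             * on this CPU until we pend again */
        }
    }

    K_THREAD_DEFINE(bottom_half, 1024, bottom_half_fn, NULL, NULL, NULL,
                    K_HIGHEST_THREAD_PRIO, 0, 0);

    void some_isr(void *arg)
    {
        k_sem_give(&bh_sem); /* runs bottom_half on ISR exit, even
                              * over cooperative threads */
    }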

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross 1856e2206d kernel/sched: Don't preempt cooperative threads
The scheduler rewrite added a regression in uniprocessor mode where
cooperative threads would be unexpectedly preempted, because nothing
was checking the preemption status of _current at the point where the
next-thread cache pointer was being updated.

Note that update_cache() needs a little more context: spots like
k_yield() that leave _current runnable need to be able to tell it that
"yes, preemption is OK here even though the thread is cooperative".
So it has a "preempt_ok" argument now.

Interestingly this didn't get caught because we don't test that.  We
have lots and lots of tests of the converse cases (i.e. making sure
that threads get preempted when we expect them to), but nothing that
explicitly tries to jump in front of a cooperative thread.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.
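
Conceptually the build-time choice amounts to this sketch (not the
literal queue code):

    #if defined(CONFIG_SCHED_DUMB)
    /* O(n) scan, but tiny: best for apps with only a few threads */
    static sys_dlist_t ready_q;
    #else
    /* balanced rbtree: scales to many threads and priorities */
    static struct rbtree ready_q;
    #endif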

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.
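
A sketch of the resulting shape (the real wrapper API has more to it):

    /* No longer a typedef for sys_dlist_t: callers must go through
     * the wait_q wrappers rather than the dlist APIs.
     */
    typedef struct {
        sys_dlist_t waitq;
    } _wait_q_t;

    #define _WAIT_Q_INIT(wq) { SYS_DLIST_STATIC_INIT(&(wq)->waitq) }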

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andy Ross 4ca0e07088 kernel: Add _unpend_all convenience wrapper to scheduler API
Refactoring.  Mempool wants to unpend all threads at once.  It's
cleaner to do this in the scheduler instead of the IPC code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros that do checks in system call handlers all
implicitly generated a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value; if nonzero is returned,
the check failed.  K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed.  They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.
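
In a handler this reads roughly as follows (macro spellings varied
during this period, so treat the exact names as assumptions):

    _SYSCALL_HANDLER(k_sem_give, sem)
    {
        /* the check returns nonzero on failure; K_OOPS() layers the
         * old kill-the-caller policy back on top */
        K_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));
        _impl_k_sem_give((struct k_sem *)sem);
        return 0;
    }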

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.
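
The heart of the recursion fix, sketched (variable names are
assumptions):

    /* Before: a single global count, tested outside the lock.
     * Nonzero could mean the *other* CPU held the lock, so a
     * recursive irq_lock() "succeeded" without acquiring anything.
     */
    if (global_irq_count++ > 0) {
        return 0; /* broken under contention */
    }

    /* After: the count lives in the thread, since _Swap()
     * implicitly releases and reacquires the lock on context switch.
     */
    if (_current->base.global_lock_count++ > 0) {
        return 0; /* genuinely recursive for *this* thread */
    }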

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Andy Ross e7ded11a2e kernel: Prune ksched.h of dead code
There was a ton of junk in this header.  Pare it down to just the
stuff actually used by code outside of sched.c, move the needed
internal stuff into sched.c itself, and drop everything else.

Note that (other than the tiny inlines that remain here in the header)
the scheduler interface exposed to the rest of the system is now
composed of just 12 functions.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-25 13:13:23 -07:00
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.
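
So the common case now amounts to (sketch):

    void _unpend_thread(struct k_thread *thread)
    {
        _unpend_thread_no_timeout(thread);
        _abort_thread_timeout(thread);
    }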

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
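
The unified idiom, sketched against a semaphore-style wait queue:

    /* before: two steps at every call site */
    _pend_thread(_current, &sem->wait_q, timeout);
    return _Swap(key);

    /* after: one scheduler API that pends and swaps */
    return _pend_current_thread(key, &sem->wait_q, timeout);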

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross b481d0a045 kernel: Allow pending w/o wait_q for scheduler API cleanup
The mailbox code was written to use the _remove_thread_from_ready_q()
API directly, which would be good to get out of the scheduler internal
API.  What it really wanted to do was to mark a thread "PENDING"
without actually adding it to a wait queue, which is sane enough (the
message stores the "thread to wake up on receipt" handle).

So allow that naturally in the _pend_thread() API by passing a NULL
wait_q.  Really a wait_q needn't be the only way a thread can block.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira 541c3cb18b kernel: sched: Fix validation of priority levels
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value.  Rewrite the
_VALID_PRIO() macro as a function so that if either of these
invariants is violated, the priority is considered invalid.
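
The corrected check as a function, sketched along the lines of the
fix:

    static inline int _is_valid_prio(int prio)
    {
        /* reject if *either* bound is violated; the old macro's
         * condition could never be true for both bounds at once
         * (numerically lower = more favorable in Zephyr)
         */
        if (prio < K_HIGHEST_APPLICATION_THREAD_PRIO) {
            return 0;
        }
        if (prio > K_LOWEST_APPLICATION_THREAD_PRIO) {
            return 0;
        }
        return 1;
    }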

Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-21 08:39:42 -07:00
Andy Ross 81242985c2 kernel/sched: Clean up docs for _pend_thread(), limit scope
The scheduler has a kernel-internal _pend_thread() utility which
sounds like a function which will add an arbitrary thread to a wait_q.
This is essentially unsupportable in SMP, where that thread might
actually be executing on a different CPU.

Thankfully we never used it like that.  The only spots outside the
scheduler that use the API are in pipes and mailbox, which both just
want to pend a DUMMY thread to track the timeout but will never try to
pend a true foreign thread.

Clarify the comment and add an assertion to make sure this promise
isn't broken in the future.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready(), returning otherwise, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 9d367eeb0a xtensa, kernel/sched: Move next switch_handle selection to the scheduler
The xtensa asm2 layer had a function to select the next switch handle
to return into following an exception.  There is no arch-specific code
there, it's just scheduler logic.  Move it to the scheduler where it
belongs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Leandro Pereira a1ae8453f7 kernel: Name of static functions should not begin with an underscore
Names that begin with an underscore are reserved by the C standard.
This patch does not change names of functions defined and implemented
in header files.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-10 08:39:10 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 32a444c54e kernel: Fix nano_internal.h inclusion
_Swap() is defined in nano_internal.h.  Everything calls _Swap().
Pretty much nothing that called _Swap() included nano_internal.h,
expecting it to be picked up automatically through other headers (as
it happened, from the kernel arch-specific include file).  A new
_Swap() is going to need some other symbols in the inline definition,
so I needed to break that cycle.  Now nothing sees _Swap() defined
anymore.  Put nano_internal.h everywhere it's needed.

Our kernel includes remain a big awful yucky mess.  This makes things
more correct but no less ugly.  Needs cleanup.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andrew Boie 9f38d2a91a kernel: have k_sched_lock call _sched_lock
Having two implementations of the same thing is bad,
especially when one can just call the other's inline version.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-17 17:42:54 -05:00
Punit Vara ce60d04fb6 kernel: sched.c: Fix datatype mismatch in comparison
All arguments coming from userspace have data type u32_t, but
base.prio has data type s8_t.  A comparison between s8_t and u32_t
is not safe, so typecast the priority coming from userspace (prio)
to s8_t.
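
I.e., a change along these lines (sketch):

    /* prio arrives from userspace as u32_t; base.prio is s8_t, so
     * cast before comparing to keep the comparison signed
     */
    if ((s8_t)prio < thread->base.prio) {
        /* requested priority is more favorable than the current one */
    }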

Signed-off-by: Punit Vara <punit.vara@intel.com>
2017-11-14 09:49:00 -08:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the number of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
Andrew Boie 5008fedc92 kernel: restrict user threads to worsen priority
User threads aren't trusted and shouldn't be able to alter the
scheduling assumptions of the system by making thread priorities more
favorable.
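
The rule, sketched as a handler-side check (the verification macro
name is an assumption):

    /* numerically lower = more favorable; a user thread may only
     * keep or worsen (numerically raise) a priority
     */
    _SYSCALL_VERIFY_MSG((s8_t)prio >= thread->base.prio,
                        "thread priority may only be lowered");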

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:24:48 -07:00
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 37ff5a9bc5 kernel: system call handler cleanup
Use new _SYSCALL_OBJ/_SYSCALL_OBJ_INIT macros.

Use new _SYSCALL_MEMORY_READ/_SYSCALL_MEMORY_WRITE macros.

Some non-obvious checks changed to use _SYSCALL_VERIFY_MSG.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 468190a795 kernel: convert most thread APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Andrew Boie 76c04a21ee kernel: implement some more system calls
These are needed to demonstrate the Philosophers demo with threads
running in user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Luiz Augusto von Dentz 87aa621915 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER whenever possible
SYS_DLIST_FOR_EACH_CONTAINER is preferable over
SYS_DLIST_FOR_EACH_NODE, as it avoids casting directly, which assumes
the node field is always at the beginning of the struct.
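
For example (sketch):

    struct waiter {
        sys_dnode_t node;  /* need not be the struct's first field */
        int value;
    };

    sys_dlist_t list;
    struct waiter *w;

    /* the macro does the container lookup for you */
    SYS_DLIST_FOR_EACH_CONTAINER(&list, w, node) {
        /* w points at the enclosing struct waiter, no manual cast */
    }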

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:08:50 -04:00
Youvedeep Singh f807d4db7e Scheduler: Same priority Preemptive threads should get equal time slice
If there are multiple preemptive threads with the same priority, and
any one thread yields the CPU before its time slice expires (due to
yields, semaphore takes, queue operations, etc.), then the next
scheduled thread gets a shorter time slice than expected.
This patch fixes the issue by accounting for the time expired when a
thread releases the CPU before its time slice expires.

Jira: ZEP-2217/ZEP-2218

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-08-08 08:51:24 -04:00
Andrew Boie 3989de7e3b kernel: fix short time-slice reset
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.

In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset.  If a thread uses up a partial time slice, and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, causing it not to be able to run as long as it ought to.

There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.
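
The mechanism, sketched (globals as described above):

    /* on every announced tick */
    _time_slice_elapsed += elapsed;
    if (_time_slice_elapsed >= _time_slice_duration) {
        _time_slice_elapsed = 0;
        _move_thread_to_end_of_prio_q(_current);
    }

    /* and now also on every swap, not only in tickless kernels */
    _time_slice_elapsed = 0;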

Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-02 14:47:01 -04:00
Maciek Borzecki 81bdee3592 kernel: make _dump_ready_q() static and visible only with CONFIG_KERNEL_DEBUG
Fixes sparse warning:
<snip>/zephyr/kernel/sched.c:368:6: warning: symbol '_dump_ready_q' was not declared. Should it be static?

Change-Id: I156e89f1d74178bbd99cc25e532da544c7ebee60
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Andrew Boie 5dcb279df8 debug: add stack sentinel feature
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.

This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.
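
Sketched (constant and field names are assumptions):

    /* at thread init: plant the sentinel at the stack's lowest word */
    *(u32_t *)thread->stack_info.start = STACK_SENTINEL;

    /* at check points (ISRs, context switch): verify it survived */
    if (*(u32_t *)thread->stack_info.start != STACK_SENTINEL) {
        /* stack overflow: report and kill the offending thread */
    }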

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Ramesh Thomas 89ffd44dfb kernel: tickless: Add tickless kernel support
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.

The implementation involves changes in the following areas

1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure the
number of ticks in a second. The new implementation builds on top of
that feature and provides an option to set the scheduling granularity
to milliseconds or microseconds. This allows most of the current
implementation to be reused. Due to this re-use and co-existence with
the tick-based kernel, the names of variables may contain the word
"tick". However, in the tickless kernel implementation, this
represents the currently configured time unit, which would be
milliseconds or microseconds. The APIs that take time as a parameter
are not impacted and continue to pass time in milliseconds.

2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.

3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.

4. New APIs are defined to be implemented by timer drivers. They
also need to handle timer events differently. These changes have
been done in the HPET timer driver. In the future, other timers that
support the tickless kernel should implement these APIs as well.
These APIs re-program the timer, and update and announce elapsed
time.

5. Philosopher and timer_api applications have been enabled to
test the tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using the
following command:
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu

Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:28 +00:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Kumar Gala 34a57db844 Revert "kernel: Convert formatter strings to use PRI defines"
This reverts commit 7b9dc107a8.

We revert this as we intend to move away from {u}int{8,16,32,64}_t types
to our own internal types for sized variables so we shouldn't need the
PRI macros anymore.

Change-Id: I1d9d797fee47ca266867ae65656c150f8fe2adb2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-19 10:50:51 -05:00
Kumar Gala 7b9dc107a8 kernel: Convert formatter strings to use PRI defines
To allow for various libc implementations (like newlib), in which the
way the various {u}int{8,16,32}_t types are defined varies between
libc implementations and across architectures, we need to utilize the
PRI defines.

Change-Id: Ie884fb67015502288152ecbd64c37961a4f538e4
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-17 11:09:36 -05:00
Benjamin Walsh 8d7c274e55 kernel/sched: protect thread sched_lock with compiler barriers
This has not bitten us yet, but it was a ticking timebomb.

This is similar to the issue that was found with irq_lock/irq_unlock
implementations on several architectures. Having a volatile variable is
not the way to force the sched_lock variable to be
incremented/decremented around the accesses to data it protects.
Instead, a compiler barrier must prevent the compiler from reordering
the memory accesses around setting of sched_lock. Needed in the inline
implementations _sched_lock()/_sched_unlock_no_reschedule(), which
resolve to simple decrement/increment of the per-thread sched_lock
variable.
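
The shape of the fix, sketched with a gcc-style barrier:

    /* volatile does not order the surrounding accesses; a compiler
     * barrier does
     */
    #define compiler_barrier() __asm__ __volatile__("" ::: "memory")

    static inline void _sched_lock(void)
    {
        --_current->base.sched_locked;
        compiler_barrier(); /* keep protected accesses after the lock */
    }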

Change-Id: I06f5b3524889f193efe69caa947118404b1be0b5
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:21 +00:00
David B. Kinder ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file.  Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Benjamin Walsh 2f280416e6 kernel: fix total number of coop prios in coop-only mode
The idle priority was not accounted for.

With this change, the philosophers demo runs in coop-only mode.

Change-Id: I23db33687bcf3b2107d5fc07977143730f62e476
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-17 12:17:27 +00:00
Benjamin Walsh b8c2160a2b kernel: do not use sys_dlist_insert_at() in _pend_thread()
It calls a function on every iteration; it's more efficient to just
do the logic inline.

Change-Id: I166e377d4ffb3056749fd625cb789173030904ac
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh 04ed860c68 kernel: make _thread.sched_locked a non-atomic operator variable
Not needed, since only the thread itself can modify its own
sched_locked count.

Change-Id: I3d3d8be548d2b24ca14f51637cc58bda66f8b9ee
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:23 +00:00
Benjamin Walsh 6209218f40 kernel: optimize ms-to-ticks for certain tick frequencies
Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.

- 1000Hz which does not need any conversion
- 500Hz, 250Hz, 125Hz where the division/multiplication are a straight
  shift since they are power-of-two factors of 1000.

In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one, which uses 64-bit
math and often results in calling compiler intrinsics.

These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).

Avoiding the 64-bit math intrinsics has the additional benefit,
beyond increased performance, of using a significantly lower amount
of stack space: 52 bytes on ARM Cortex-M and 80 bytes on x86.
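
For example, at a few of these frequencies (a sketch, assuming
CONFIG_SYS_CLOCK_TICKS_PER_SEC holds the tick rate):

    static inline s32_t _ms_to_ticks(s32_t ms)
    {
    #if CONFIG_SYS_CLOCK_TICKS_PER_SEC == 1000
        return ms;      /* 1 tick == 1 ms: no conversion at all */
    #elif CONFIG_SYS_CLOCK_TICKS_PER_SEC == 250
        return ms >> 2; /* 1000/250 == 4: a straight shift */
    #else
        /* generic path: 64-bit math, ceiling division */
        return (s32_t)(((s64_t)ms * CONFIG_SYS_CLOCK_TICKS_PER_SEC +
                        999) / 1000);
    #endif
    }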

Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:07 +00:00
Anas Nashif d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Renamed from kernel/unified/sched.c