Commit graph

463 commits

Author SHA1 Message Date
Anas Nashif 6ecadb03ab cleanup: include/: move misc/math_extras.h to sys/math_extras.h
move misc/math_extras.h to sys/math_extras.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
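
A minimal sketch of the shim pattern described in this commit (and the three similar header moves below); the guard shape and warning text are assumed rather than quoted from the change:

    /* include/misc/math_extras.h -- compatibility shim (sketch): warn
     * unless CONFIG_COMPAT_INCLUDES silences it, then forward to the
     * header's new home.
     */
    #ifndef CONFIG_COMPAT_INCLUDES
    #warning "This header has moved, include <sys/math_extras.h> instead."
    #endif

    #include <sys/math_extras.h>
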
Anas Nashif ee9dd1a54a cleanup: include/: move misc/dlist.h to sys/dlist.h
move misc/dlist.h to sys/dlist.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif e1e05a2eac cleanup: include/: move atomic.h to sys/atomic.h
move atomic.h to sys/atomic.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 10291a0789 cleanup: include/: move tracing.h to debug/tracing.h
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Andrew Boie aade2b5a20 kernel: offsets: exclude from coverage
None of this is runtime code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-25 17:22:34 -07:00
Ioannis Glaropoulos a6cb8b06db kernel: introduce k_float_disable system call
We introduce k_float_disable() system call, to allow threads to
disable floating point context preservation. The system call is
to be used in FP Sharing Registers mode (CONFIG_FP_SHARING=y).

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-12 09:17:45 -07:00
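
A hedged usage sketch of the new call, assuming it takes the thread to modify and returns nonzero when the feature is unsupported:

    #include <kernel.h>

    /* Give up FP context preservation for the calling thread once its
     * floating point work is done (meaningful only with CONFIG_FP_SHARING=y).
     */
    void fp_work_done(void)
    {
        if (k_float_disable(k_current_get()) != 0) {
            /* unsupported on this architecture/configuration */
        }
    }
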
Andy Ross a12f2d6666 kernel/smp: Rename smp_init()
This name collides with one in the bt subsystem, and wasn't named in
proper zephyrese anyway.

Fixes #16604

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-06-05 17:15:55 -04:00
Nicolas Pitre 58d839bc3c misc: memory address type conversions
The uintptr_t type is more appropriate to represent memory addresses
than u32_t.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2019-06-03 21:14:57 -04:00
Jakob Olesen c8708d9bf3 misc: Replace uses of __builtin_*_overflow() with <misc/math_extras.h>.
Use the new math_extras functions instead of calling builtins directly.

Change a few local variables to size_t after checking that all uses of
the variable actually expects a size_t.

Signed-off-by: Jakob Olesen <jolesen@fb.com>
2019-05-14 19:53:30 -05:00
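
A sketch of the kind of call site this conversion targets; size_mul_overflow() is assumed to be one of the math_extras helpers, returning true when the multiplication overflows:

    #include <kernel.h>
    #include <misc/math_extras.h>   /* <sys/math_extras.h> after the move above */

    /* Allocate count elements of elem_size, rejecting overflowing requests. */
    void *alloc_array(size_t count, size_t elem_size)
    {
        size_t total;

        if (size_mul_overflow(count, elem_size, &total)) {
            return NULL;   /* multiplication overflowed */
        }
        return k_malloc(total);   /* needs a kernel heap configured */
    }
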
Benoit Leforestier 9915b4ec4e C++: Fix compilation error "invalid conversion"
When some headers are included in a C++ source file, this kind of
compilation error is generated:
error: invalid conversion from 'void*'
	to 'u32_t*' {aka 'unsigned int*'} [-fpermissive]

Signed-off-by: Benoit Leforestier <benoit.leforestier@gmail.com>
2019-05-03 14:27:07 -04:00
Ioannis Glaropoulos 873dd10ea4 kernel: mem_domain: update name/doc of API function for partition add
Update the name of the mem-domain API function for adding a partition
so that it complies with the 'z_' prefix convention. Correct
the function documentation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-05-02 11:37:38 -04:00
Flavio Ceolin 4f99a38b06 arch: all: Remove not used struct _caller_saved
The struct _caller_saved is not used. Most architectures automatically
put the registers onto the stack; in other architectures the
exception code does it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Flavio Ceolin d61c679d43 arch: all: Remove legacy code
The struct _kernel_ach exists only because ARC's port needed it; in
all other ports it was defined as an empty struct. It turns out that
this struct is not required even for ARC anymore; it is legacy
code from nanokernel times.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-04-18 12:24:56 -07:00
Patrik Flykt 7c0a245d32 arch: Rename reserved function names
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Andrew Boie 526807c33b userspace: add const qualifiers to user copy fns
The source data is never modified.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-29 22:21:16 -04:00
Patrik Flykt 21358baa72 all: Update unsigned 'U' suffix due to multiplication
As the multiplication rule is updated, new unsigned suffixes
are added in the code.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
Patrik Flykt 24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
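
A hypothetical before/after for the 'U' suffix sweep in the two commits above (names and values are illustrative):

    #include <zephyr/types.h>

    static u32_t clamp_ms(u32_t ticks)
    {
        u32_t timeout_ms = ticks * 10U;   /* was: ticks * 10 */

        if (timeout_ms > 1000U) {         /* was: > 1000     */
            timeout_ms = 1000U;
        }
        return timeout_ms;
    }
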
Flavio Ceolin 2df02cc8db kernel: Make if/iteration evaluate boolean operands
Controlling expression of if and iteration statements must have a
boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 22:06:45 -04:00
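
A hypothetical illustration of the MISRA-C rule 14.4 changes in this series: controlling expressions become explicitly boolean instead of relying on implicit integer truth tests:

    #include <zephyr/types.h>

    static void drain(u32_t flags, int remaining)
    {
        if (flags != 0U) {          /* was: if (flags)        */
            /* handle flags */
        }

        while (remaining > 0) {     /* was: while (remaining) */
            remaining--;
        }
    }
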
Flavio Ceolin 2ecc7cfa55 kernel: Make _is_thread_prevented_from_running return a bool
This function was returning an essentially boolean value. Just changing
the signature to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Andy Ross e59d19628d kernel/sched: Rework prio validity assertion
This is throwing errors in static analysis, complaining that a priority
being both higher and lower is impossible.  That is wrong to my
eyes (I swear I think it might be cueing off the names of the
functions, which invert "higher" and "lower" to match our reversed
priority numbers).

But frankly this was never a very readable macro to begin with.
Refactor to put the bounds into the term, so the static analyzer can
prove it locally, and add a build assertion to catch any errors (there
are none currently) where the low<->high priority range is invalid.

Long term, we should probably remove this macro, it doesn't provide
much value.  But removing it in response to a static analysis failure
is... not very responsible as a development practice.

Fixes #14816
Fixes #14820

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-23 09:53:55 -05:00
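
A hedged sketch of the reworked check; the macro and bound names below are illustrative, not the actual kernel code. The bounds sit directly in the expression so the analyzer can prove them locally, and a build assertion guards the range itself:

    #include <kernel.h>

    /* In Zephyr's scheme "higher" priority means a numerically smaller
     * value, so the lowest application priority is numerically >= the
     * highest one.
     */
    BUILD_ASSERT(K_LOWEST_APPLICATION_THREAD_PRIO >=
                 K_HIGHEST_APPLICATION_THREAD_PRIO);

    #define VALID_PRIO(prio) \
        ((prio) >= K_HIGHEST_APPLICATION_THREAD_PRIO && \
         (prio) <= K_LOWEST_APPLICATION_THREAD_PRIO)
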
Andrew Boie 7ea211256e userspace: properly namespace handler functions
Now prefixed with z_hdlr_ instead of just hdlr_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Andrew Boie 50be938be5 userspace: renamespace some internal macros
These private macros are now all prefixed with Z_.

Fixes: #14447

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-18 09:23:11 -07:00
Ioannis Glaropoulos cac20e91d8 kernel: userspace: correct documentation for Z_SYSCALL_MEMORY_ macros
Corrections in the documentation of arguments in
Z_SYSCALL_MEMORY, Z_SYSCALL_MEMORY_READ, and
Z_SYSCALL_MEMORY_WRITE macros.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Ioannis Glaropoulos c686dd5064 kernel: enhance documentation of z_arch_buffer_validate
This commit enhances the documentation of z_arch_buffer_validate
describing the cases where the validation is performed
successfully, as well as the cases where the result is
undefined.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-03-13 15:36:15 -07:00
Andy Ross 42ed12a387 kernel/sched: arch/x86_64: Support synchronous k_thread_abort() in SMP
Currently thread abort doesn't work if a thread is currently scheduled
on a different CPU, because we have no way of delivering an interrupt
to the other CPU to force the issue.  This patch adds a simple
framework for an architecture to provide such an IPI, implements it
for x86_64, and uses it to implement a spin loop in abort for the case
where a thread is currently scheduled elsewhere.

On SMP architectures (xtensa) where no such IPI is implemented, we
fall back to waiting on an arbitrary interrupt to occur.  This "works"
for typical code (and all current tests), but of course it cannot be
guaranteed on such an architecture that k_thread_abort() will return
in finite time (e.g. the other thread on the other CPU might have
taken a spinlock and entered an infinite loop, so it will never
receive an interrupt to terminate itself)!

On non-SMP architectures this patch changes no code paths at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-03-13 19:15:20 +01:00
Patrik Flykt 4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Flavio Ceolin d9876be30c kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-05 14:58:58 -08:00
Andrew Boie 62fad96802 userspace: zero app memory bss earlier
Some init tasks may use some bss app memory areas and
expect them to be zeroed out. Do this much earlier
in the boot process, before any of the init tasks
run.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-05 08:27:20 -05:00
Andrew Boie 6dc3fd8e50 userspace: fix x86 issue with adding partitions
On x86, if a supervisor thread belonging to a memory domain
adds a new partition to that domain, then subsequent context switches
to another thread in the same domain, or dropping itself to user
mode, do not get the correct setup in the page tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-03 23:44:13 -05:00
Andrew Boie f5951cd88f kernel: syscall_handler: get rid of stdarg
We can just implement this as a macro and not needlessly
run afoul of MISRA-C rule 17.1

Fixes: #10012

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-25 13:42:03 -08:00
Andrew Boie 2cfeba8507 x86: implement interrupt stack trampoline
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.

Adjustments to page tables are now as follows:

- Any adjustments for stack memory access now are always done
  to the user page tables

- Any adjustments for memory domains are now always done to
  the user page tables

- With KPTI, resetting a page now clears the present bit

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Andy Ross 1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reason, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code to do a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful as it spams the
global lock.  Provide an _unlocked() variant for
_Swap/_reschedule/_pend_curr for simplicity and efficiency.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
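
A sketch of what one _unlocked() convenience might look like under the naming in this commit; the exact definition is assumed, not quoted. It simply wraps the irqlock variant around a fresh irq_lock() so callers stop open-coding that pattern:

    /* assumed shape, for illustration only */
    static inline void _reschedule_unlocked(void)
    {
        _reschedule_irqlock(irq_lock());
    }
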
Andy Ross ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andrew Boie 4b4f773484 libc: set up memory partitions
* Newlib now defines a special z_newlib_partition containing
  all globals relevant to newlib. Most of these are in libc.a
  with a heap tracking variable in newlib's hooks.

* Both C libraries now expose a k_mem_partition containing the
  bounds of the malloc heap arena. Threads that want to use
  libc malloc() will need to add this to their memory domain.

* z_newlib_get_heap_bounds has been removed, in favor of the
  memory partition for the heap arena

* ztest now includes the C library partitions in its memory
  domain.

* The mem_alloc test now runs in user mode to prove that this
  all works for both C libraries.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
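
A hedged usage sketch based on the commit text: a thread that wants libc malloc() in user mode adds the C library partition to its memory domain (the domain and array names are illustrative; the heap-arena partition mentioned above would be added the same way):

    #include <kernel.h>

    extern struct k_mem_partition z_newlib_partition;

    static struct k_mem_partition *app_parts[] = { &z_newlib_partition };
    static struct k_mem_domain app_domain;

    void app_domain_setup(void)
    {
        k_mem_domain_init(&app_domain, 1, app_parts);
        k_mem_domain_add_thread(&app_domain, k_current_get());
    }
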
Andy Ross b2791b0ac8 kernel/sched: Force inlining of some routines within the scheduler guts
GCC 6.2.0 is making frustratingly poor inlining decisions with some of
these routines, resulting in an awful lot of runtime calls for code
that is only ever expanded once or twice within the file.

Treat with targeted ALWAYS_INLINE's to force the issue.  The
scheduler code is a hot path.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
Peter A. Bigot b4ece0ad44 kernel: timeout: detect inactive timeouts using dnode linked state
Whether a timeout is linked into the timeout queue can be determined
from the corresponding sys_dnode_t linked state.  This removes the need
to use a special flag value in dticks to determine that the timeout is
inactive.

Update _abort_timeout to return an error code, rather than the flag
value, when the timeout to be aborted was not active.

Remove the _INACTIVE flag value, and replace its external uses with an
internal API function that checks whether a timeout is inactive.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
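
A sketch of the check this enables (helper and field names assumed, mirroring the kernel's internal struct _timeout): whether a timeout is active now falls out of the dnode's own linked state rather than an _INACTIVE sentinel in dticks:

    #include <stdbool.h>
    #include <zephyr/types.h>
    #include <misc/dlist.h>

    struct my_timeout {
        sys_dnode_t node;   /* stands in for the node inside struct _timeout */
        s32_t dticks;
    };

    /* sys_dnode_is_linked() replaces the old "dticks == _INACTIVE" test. */
    static bool timeout_is_inactive(const struct my_timeout *t)
    {
        return !sys_dnode_is_linked(&t->node);
    }
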
Peter A. Bigot 25fbe7b60d kernel: timeout: remove local fix for double-remove
Use the new generic capability to detect unlinked sys_dnode_t instances.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
Andy Ross 762ff2f428 kernel/swap: Simply/robustify return value handling
The call to _arch_switch is a giant screaming sign inviting optimizer
bugs.  The code that appears before is what happened long ago when we
were switched out, but the version that EXECUTED just now is actually
in a different thread.  So the assignment to _current before the
switch actually assigned OUR thread (the "new_thread" of the old
context!) to _current.

But obviously the optimizer looks at that code and assumes that the
_current which got assigned to the thread we were switching to long
ago is still correct, and used it when retrieving the swap return
value.

Obviously the real bug here is that the _arch_switch() in question
lacked a memory clobber (and it's getting one).

But we can remove two lines, remove code from inside the interrupt
lock and make the implementation more robust by moving the read to
after the irq_unlock() (which generally also has a memory clobber).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Flavio Ceolin 76b3518ce6 kernel: Make statements evaluate boolean expressions
MISRA-C requires that the if statement has essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Andrew Boie 74f114caef userspace: easy checking for specific driver
In general driver system calls are implemented at a subsystem
layer. However, some drivers may have capabilities specific to
the hardware not covered by the subsystem API. Such drivers may
want to define their own system calls.

This macro makes it simple to validate in the driver-specific
system call handlers that not only does the untrusted device
pointer correspond to the expected subsystem, initialization
state, and caller permissions, but also that the device object
is an instance of a specific driver (and not just any driver in
that subsystem).

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-27 20:31:58 -05:00
Flavio Ceolin b7287ceb4e kernel: syscall: Object validation checks boolean statement
The function that checks if an object is valid is essentially a boolean
function. Just changing its return type to reflect it.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Pawel Dunaj baea22407d kernel: Always set clock expiry with sync with timeout module
The system must not set the clock expiry via a backdoor, as it may
result in unbounded time drift of all scheduled timeouts.

Fixes: #11502

Signed-off-by: Pawel Dunaj <pawel.dunaj@nordicsemi.no>
2018-11-26 12:24:59 +01:00
Andrew Boie 42cfd4ff26 kernel: expose k_busy_wait() to user mode
If we just had the kernel's implementation, we could
just move this to lib/, but possible arch-specific
implementations dictate that we just make this a
syscall.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-15 16:20:36 -05:00
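
Usage does not change; the point is only that a call like this now also works from a user-mode thread:

    #include <kernel.h>

    void spin_briefly(void)
    {
        k_busy_wait(100);   /* busy-spin for roughly 100 microseconds */
    }
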
Flavio Ceolin a406b88fca kernel: Remove duplicated identifier
There was a struct and a variable both called _kernel. This is error prone
and a MISRA-C violation. This change gives the struct a unique
identifier.

MISRA-C rule 5.8

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-04 11:37:24 -05:00
Flavio Ceolin a3dddedab6 kernel: Use distinct macro names
There is a struct and a macro both called _ready_q, which is error
prone. Just removing it.

MISRA-C rule 5.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Marek Pieta e87193896a subsys: debug: tracing: Fix thread tracing
This change fixes an issue with thread execution tracing.

Signed-off-by: Marek Pieta <Marek.Pieta@nordicsemi.no>
2018-10-29 22:09:12 -04:00
Adithya Baglody 6176692f4b kernel: ksched.h: Incorrect argument type in _pend_current_thread
In _pend_current_thread the argument key is always an unsigned
integer type, and this function forces it to become a signed
integer. This is dangerous behavior and can't be trusted to
work as expected.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 12:17:58 -04:00
Paul Sokolovsky b779ea2d19 kernel: syscall_handler.h: Typo fix in docstring
Should be "fails" instead of "files".

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-10-17 10:32:10 -04:00
Adithya Baglody 1424561252 kernel: sched: Fixed incorrect argument type of _reschedule()
This API shouldn't take an int type; it should take
u32_t instead. The argument has to match that of irq_lock() and
irq_unlock().

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-10-17 07:59:51 -04:00
Andy Ross cfe62038d2 kernel: Checkpatch fixups
I was pretty careful, but these snuck in.  Most of them are due to
overbroad string replacements in comments.  The pull request is very
large, and I'm too lazy to find exactly where to back-merge all of
these.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 987c0e5fc1 kernel: New timeout implementation
Now that the API has been fixed up, replace the existing timeout queue
with a much smaller version.  The basic algorithm is unchanged:
timeouts are stored in a sorted dlist with each node holding a delta
time from the previous node in the list; the announce call just walks
this list pulling off the heads as needed.  Advantages:

* Properly spinlocked and SMP-aware.  The earlier timer implementation
  relied on only CPU 0 doing timeout work, and on an irq_lock() being
  taken before entry (something that was violated in a few spots).
  Now any CPU can wake up for an event (or all of them) and everything
  works correctly.

* The *_thread_timeout() API is now expressible as a clean wrapping
  (just one liners) around the lower-level interface based on function
  pointer callbacks.  As a result the timeout objects no longer need
  to store backpointers to the thread and wait_q and have shrunk by
  33%.

* MUCH smaller, to the tune of hundreds of lines of code removed.

* Future proof, in that all operations on the queue are now fronted by
  just two entry points (_add_timeout() and z_clock_announce()) which
  can easily be augmented with fancier data structures.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 52e444bc05 kernel: Move timeout_remaining API
_timeout_remaining_get() was a function on a struct _timeout, doing
iteration on the timeout list, but it was defined in timer.c (the
higher level abstraction).

Move it to where it belongs.  Also have it return ticks instead of ms
to conform to the scheme in the rest of the timeout API.  And rename it to
a more standard zephyr name.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross fe82f1c2af kernel/timeout: Refactor API
Add the callback parameter to add_timeout(), and remove the thread
argument.  Now the "low level" timeout API can be expressed without
reference to threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 5d203523b6 kernel/timeout: Eliminate wait_q parameters from API
Now that this is known to be an unused value, remove it from the API.
Note that this caught a few spots where we were passing values (a
non-NULL wait_q with a NULL thread handle) that were always being
ignored before.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 2ae8f50936 kernel/include: Move stubs for timeout functions to their declarations
The timeout_q.h scheme, where the real functions were declared there
but the stubs for when there was no clock lived in wait_q.h, was
senselessly weird.  Put them in the same file.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 9098a45c84 kernel: New timeslicing implementation
Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~5 Hz) and at true rescheduling
   points, not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 8b54953e4b kernel/sys_clock: Fix build when !SYS_CLOCK_EXISTS
This got broken.  Add some #ifery to handle the case.  Not clean, will
clean up in a future pass once the API is final.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 1c08aefe56 kernel/timeoutq: Uninline the timeout methods
There was no good reason to have these rather large functions in a
header.  Put them into sys_clock.c for now, pending rework to the
system.

Now the API is clearly visible in a small header.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross 722a888ef7 timer: Clean up hairy tickless APIs
The tickless driver had a bunch of "hairy" APIs which forced the timer
drivers to do needless low-level accounting for the benefit of the
kernel, all of which then proceeded to implement them via cut and
paste.  Specifically the "program_time" calls forced the driver to
expose to the kernel exactly when the next interrupt was due and how
much time had elapsed, in a parallel API to the existing "what time is
it" and "announce a tick" interrupts that carry the same information.

Remove these from the kernel, replacing them with synthesized logic
written in terms of the simpler APIs.

In some cases there will be a performance impact due to the use of the
64 bit uptime call, but that will go away soon.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Andy Ross ab488277bc drivers/timer: Unify timeout setting APIs
The existing API had two almost identical functions: _set_time() and
_timer_idle_enter().  Both simply instruct the timer driver to set the
next timer interrupt expiration appropriately so that the call to
z_clock_announce() will be made at the requested number of ticks.  On
most/all hardware, these should be implementable identically.

Unfortunately because they are specified differently, existing drivers
have implemented them in parallel.

Specify a new, unified, z_clock_set_timeout().  Document it clearly
for implementors.  And provide a shim layer for legacy drivers that
will continue to use the old functions.

Note that this patch fixes an existing bug found by inspection: the
old call to _set_time() out of z_clock_announce() failed to test for
the "wait forever" case in the situation where clock_always_on is
true, meaning that a system that reached this point and then never set
another timeout would freeze its uptime clock incorrectly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Flavio Ceolin 18af4c6299 kernel: Fix overflow test problem introduced in 92ea2f9
The builtin function __builtin_umul_overflow returns a boolean and
should not be checked as an integer.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-04 05:20:29 -07:00
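
The fix amounts to treating the builtin's result as a boolean. A small self-contained illustration (not the kernel code itself):

    #include <stdbool.h>

    /* __builtin_umul_overflow() returns true when the multiplication
     * overflows, so it must be tested as a boolean rather than compared
     * as a numeric result.
     */
    static bool checked_mul(unsigned int a, unsigned int b, unsigned int *out)
    {
        return !__builtin_umul_overflow(a, b, out);
    }
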
Flavio Ceolin 6fc84feaf2 kernel: syscalls: Change handlers namespace
According to C99, the first 31 characters of an identifier must be unique.
Shorten the namespace of the generated objects to achieve this.

C99 - 5.2.4.1
MISRA-C rule 5.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 07:58:19 +05:30
Flavio Ceolin ea716bf023 kernel: Explicitly comparing pointer with NULL
MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 92ea2f9189 kernel: Calling Z_SYSCALL_VERIFY_MSG with boolean expressions
Explicitly make a boolean expression when calling the
Z_SYSCALL_VERIFY_MSG macro.

MISRA-C rule: 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 02ed85bd82 kernel: sched: Change boolean APIs to return bool
Change APIs that essentially return a boolean expression  - 0 for
false and 1 for true - to return a bool.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Flavio Ceolin 6fdc56d286 kernel: Using boolean types for boolean constants
Make boolean expressions use boolean types.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 06:28:41 +05:30
Anas Nashif 57554055d2 kernel: add a new API for setting thread names
Add k_thread_name_set() and enable thread name setting when declaring
static threads. This is enabled only when THREAD_MONITOR is used. System
threads get a name by default.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-09-27 08:58:55 +05:30
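
A hedged usage sketch of the new API (assumes CONFIG_THREAD_MONITOR=y, per the commit):

    #include <kernel.h>

    void name_self(void)
    {
        k_thread_name_set(k_current_get(), "worker");
    }
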
Daniel Leung 7228a60173 kernel: Fix compilation errors when CONFIG_TIMESLICING=n
Add ifdef guard to the z_reset_timeslice() to fix compilation
errors when CONFIG_TIMESLICING is disabled.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-09-25 12:54:58 +05:30
Findlay Feng 3c834bdf27 kernel: Fix list-node add again corruption case in timeout handling
A node on the temporary timeout list cannot be used to index
the next node once it has been added back to the timeout queue.

Signed-off-by: Findlay Feng <i@fengch.me>
2018-09-21 13:29:09 -04:00
Flavio Ceolin b3d9202704 kernel: Using boolean constants instead of 0 or 1
MISRA-C requires that every controlling expression of an if or while
statement have a boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-18 13:57:15 -04:00
Sebastian Bøe 878a0f050e ld: Put 'sizeof(struct device)' in the generated offsets header
Rename _DEVICE_STRUCT_SIZE to _DEVICE_STRUCT_SIZEOF. This causes it to
be picked up by the script 'gen_offset_header.py' and inserted into the
header file 'include/generated/offsets.h'.

Renaming from x_SIZE to x_SIZEOF will align its name with the other
symbols that denote a struct's size, like K_THREAD_SIZEOF.

Furthermore, it will allow the symbol to be accessed through a header
file define, instead of only as an extern symbol. This is more
flexible, and more aligned with the other symbols in offsets.

Finally, if we are able to move all of offsets.c's symbols into the
offsets.h header file, we will be able to remove offsets.o from the link
and thereby simplify the linking process.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-09-18 16:23:40 +02:00
Flavio Ceolin a7fffa9e00 headers: Fix headers guards
Any word starting with an underscore followed by an uppercase letter or
a second underscore is a reserved word according to C99.

We have *many* violations in Zephyr's code; this commit tackles
only the violations caused by header guards. It also takes the
opportunity to normalize them, using the filename in uppercase and
replacing the dot with an underscore, e.g. file.h -> FILE_H

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
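
Following the convention described above, the guard for a hypothetical file.h would look like this:

    #ifndef FILE_H
    #define FILE_H

    /* declarations */

    #endif /* FILE_H */
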
Flavio Ceolin 98c64b6d92 kernel: Change _reschedule signature
_reschedule's return value is not used anywhere, except erroneously by
pthread_barrier_wait.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Flavio Ceolin 8a9ba10c2c kernel: swap: Fix __swap signature
The __swap function was returning -EAGAIN in some cases, though its
return value was declared as unsigned int.

This commit changes the function to return int, since it can return a
negative value and its return value was already being propagated as int.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Andy Ross 9ecc4ead68 sched: Properly account for timeslicing in tickless mode
When adding a new runnable thread in tickless mode, we need to detect
whether it will timeslice with the running thread and reset the timer,
otherwise it won't get any CPU time until the next interrupt fires at
some indeterminate time in the future.

This fixes the specific bug discussed in #7193, but the broader
problem of tickless and timeslicing interacting badly remains.  The
code as it exists needs some rework to avoid all the #ifdef mess.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-29 10:01:41 -04:00
Anas Nashif 0e07f8e97a Revert "sched: Properly account for timeslicing in tickless mode"
This reverts commit bc6fb65c81.

Causes MPU faults on multiple platforms.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-27 18:39:51 -04:00
Andy Ross bc6fb65c81 sched: Properly account for timeslicing in tickless mode
When adding a new runnable thread in tickless mode, we need to detect
whether it will timeslice with the runnable thread and reset the
timer, otherwise it won't get any CPU time until the next interrupt
fires at some indeterminate time in the future.

This fixes the specific bug discussed in #7193, but the broader
problem of tickless and timeslicing interacting badly remains.  The
code as it exists needs some rework to avoid all the #ifdef mess.

Note that the patch also moves _ready_thread() from a ksched.h inline
to sched.c.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-27 13:19:29 -04:00
Andy Ross d8d5ec3f91 kernel: Fix double-list-removal corruption case in timeout handling
This fixes #8669, and is distressingly subtle for a one-line patch:

The list iteration code in _handle_expired_timeouts() would remove the
timeout from our (temporary -- the dlist header is on the stack of our
calling function) list of expired timeouts before invoking the
handler.  But sys_dlist_remove() only fixes up the containing list
pointers, leaving garbage in the node.  If the action of that handler
is to re-add the timeout (which is very common!) then that will then
try to remove it AGAIN from the same list.

Even then, the common case is that the expired list contains only one
item, so the result is a perfectly valid empty list that affects
nothing.  But if you have more than one, you get a corrupt cycle in
the iteration list and things get weird.

As it happens, there's no value in trying to remove this timeout from
the temporary list at all.  Just iterate over it naturally.

Really, this design is fragile: we shouldn't be reusing the list nodes
in struct _timeout for this purpose and should figure out some other
mechanism.  But this fix should be good for now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-08-26 19:39:52 -07:00
Anas Nashif 483910ab4b systemview: add support natively using tracing hooks
Add needed hooks as a subsystem that can be enabled in any application.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif a2248782a2 kernel: event_logger: remove kernel_event_logger
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Adithya Baglody bb918d85f8 tests: benchmarks: timing_info: Enable benchmarks for xtensa.
This patch provides support needed to get timing related
information from xtensa based SOC.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-08-20 06:51:25 -07:00
Flavio Ceolin 8aec087268 kernel: Fix bitwise operators with unsigned operators
Bitwise operators should be used only with unsigned integer operands
because the result of bitwise operations on signed integers is
implementation-defined.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Flavio Ceolin ec462f872c kernel: Remove unused definition
_thread definition is not used, just removing it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Sebastian Bøe 1186f5bb29 cmake: Deprecate the 2 symbols _SYSCALL_{LIMIT,BAD}
There exist two symbols that became equivalent when PR #9383 was
merged; _SYSCALL_LIMIT and K_SYSCALL_LIMIT. This patch deprecates the
redundant _SYSCALL_LIMIT symbol.

_SYSCALL_LIMIT was initially introduced because before PR #9383 was
merged K_SYSCALL_LIMIT was an enum, which couldn't be included into
assembly files. PR #9383 converted it into a define, which can be
included into assembly files, making _SYSCALL_LIMIT redundant.

Likewise for _SYSCALL_BAD.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-08-15 11:46:51 -07:00
Andrew Boie 83fda7c68f userspace: add _k_object_recycle()
This is used to reset the permissions on an object while
also initializing it.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-08-13 07:19:39 -07:00
Andrew Boie c8188f6722 userspace: add functions for copying to/from user
We now have functions for handling all the details of copying
data to/from user mode, including C strings and copying data
into resource pool allocations.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-31 07:47:15 -07:00
Andrew Boie 1f2eedff18 kernel: add z_arch_user_string_nlen prototype
This is used to measure the length of potentially unsafe
strings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-31 07:47:15 -07:00
Andy Ross 9f06a35450 kernel: Add the old "multi queue" scheduler algorithm as an option
Zephyr 1.12 removed the old scheduler and replaced it with the choice
of a "dumb" list or a balanced tree.  But the old multi-queue
algorithm is still useful in the space between these two (applications
with large-ish numbers of runnable threads, but that don't need fancy
features like EDF or SMP affinity).  So add it as a
CONFIG_SCHED_MULTIQ option.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Andy Ross 225c74bbdf kernel/Kconfig: Reorganize wait_q and sched algorithm choices
Make these "choice" items instead of a single boolean that implies the
element unset.

Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (it's constant factor overhead is
bigger than a list's!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Andy Ross 55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll.  The disadvantage is the 4
bytes of thread space needed.  Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so no need for an irq lock/unlock pair.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
Andrew Boie 2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Andy Ross eace1df539 kernel/sched: Fix SMP scheduling
Recent changes post-scheduler-rewrite broke scheduling on SMP:

The "preempt_ok" feature added to isolate preemption points wasn't
honored in SMP mode.  Fix this by adding a "swap_ok" field to the CPU
record (not the thread) which is set at the same time out of
update_cache().

The "queued" flag wasn't being maintained correctly when swapping away
from _current (it was added back to the queue, but the flag wasn't
set).

Abstract out a "should_preempt()" predicate so SMP and uniprocessor
paths share the same logic, which is distressingly subtle.

There were two places where _Swap() was predicated on
_get_next_ready_thread() != _current.  That's no longer a benign
optimization in SMP, where the former function REMOVES the next thread
from the queue.  Just call _Swap() directly in SMP, which has a
unified C implementation that does this test already.  Don't change
other architectures in case it exposes bugs with _Swap() switching
back to the same thread (it should work, I just don't want to break
anything).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-31 14:02:03 -04:00
Andrew Boie 538754cb28 kernel: handle early entropy issues
We generalize querying the entropy driver directly with
a new internal API, which is now used by CONFIG_STACK_RANDOM
and stack canary initialization.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-23 19:38:06 -07:00
Kumar Gala 177bbbd35f kernel: Fix trivial typo in CONFIG_WAIT_Q_FAST
The Kconfig option is CONFIG_WAITQ_FAST not CONFIG_WAIT_Q_FAST.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-23 17:57:06 -04:00
Andy Ross 4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
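
A hedged usage sketch of the single new API call the commit mentions, assuming it is k_thread_deadline_set() and takes the deadline as a tick delta from "now":

    #include <kernel.h>

    /* Requires the deadline-scheduling option; ties between equal-priority
     * threads are then broken by earliest deadline.
     */
    void start_periodic_work(k_tid_t tid)
    {
        k_thread_deadline_set(tid, 100);   /* deadline: 100 ticks from now */
    }
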
Andy Ross 1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross c0ba11b281 kernel: Don't _arch_switch() to yourself
The SMP testing missed the case where _Swap() decides to return back
into the _current.  Obviously there is no valid switch handle for the
running thread into which we can restore, and everything blows up.
(What happened is that the new scheduler code opened up a spot where
k_thread_priority_set() does a _reschedule() unconditionally and
doesn't check to see whether or not it's needed like the old code).

But that isn't incorrect!  It's entirely possible that _Swap() may
find that no thread is runnable except _current (due, for example, to
another CPU racing the other thread you expected off to sleep or
something).  Don't blow up, check and return a noop.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andy Ross 4ca0e07088 kernel: Add _unpend_all convenience wrapper to scheduler API
Refactoring.  Mempool wants to unpend all threads at once.  It's
cleaner to do this in the scheduler instead of the IPC code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers all
implictly would generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value, if nonzero is returned
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
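
A hedged sketch of the new pattern (handler shape and the particular check macro are chosen for illustration): the check returns nonzero on failure, so a handler may clean up and propagate an error itself, or keep the old behavior via the K_OOPS() wrapper:

    #include <errno.h>
    #include <syscall_handler.h>

    static int my_handler(void *buf, size_t len)
    {
        /* propagate an error instead of oopsing... */
        if (Z_SYSCALL_MEMORY_WRITE(buf, len) != 0) {
            return -EFAULT;
        }

        /* ...or keep the oops-on-failure policy: */
        K_OOPS(Z_SYSCALL_MEMORY_WRITE(buf, len));

        return 0;
    }
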
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.

Fixes: #6814

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Leandro Pereira c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.

Fixes #6907.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Andy Ross e7ded11a2e kernel: Prune ksched.h of dead code
There was a ton of junk in this header.  Pare it down to just the
stuff actually used by code outside of sched.c, move the needed
internal stuff into sched.c itself, and drop everything else.

Note that (other than the tiny inlines that remain here in the header)
the scheduler interface exposed to the rest of the system is now
composed of just 12 functions.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-25 13:13:23 -07:00
Andy Ross 8a4b2e8cf2 kernel, posix: Move ready_one_thread() to scheduler
The POSIX layer had a simple ready_one_thread() utility.  Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might effect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira 541c3cb18b kernel: sched: Fix validation of priority levels
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value.  Rewrite the
_VALID_PRIO() macro as a function so that if either of these
invariants is violated, the priority is considered invalid.

Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-21 08:39:42 -07:00
Anas Nashif c7f5cc9bcb license: fix spdx identifier in a few files
Use correct SPDX identifier for Apache 2.0.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-04-12 15:19:51 -04:00
Kristian Klomsten Skordal c39e2a2d6c kernel: Fix left shift into sign bit
The result of left shifting a bit into the sign-bit is undefined
behavior. This makes the offending shift operation unsigned.

Signed-off-by: Kristian Klomsten Skordal <kristian.skordal@nordicsemi.no>
2018-03-22 19:16:17 -04:00
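
Illustratively (the shifted value here is hypothetical):

    #include <zephyr/types.h>

    /* before: u32_t mask = 1 << 31;  -- UB, shifts a signed 1 into the sign bit */
    static const u32_t mask = 1U << 31;   /* unsigned operand, well defined */
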
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 9d367eeb0a xtensa, kernel/sched: Move next switch_handle selection to the scheduler
The xtensa asm2 layer had a function to select the next switch handle
to return into following an exception.  There is no arch-specific code
there, it's just scheduler logic.  Move it to the scheduler where it
belongs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 28192fd8ea kernel/kswap.h: Hook event logger from switch-based _Swap
The new generic _Swap() forgot the event logger hook

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 564f59060c kernel: SMP timer integration
In SMP, the system timer is used for timeslicing on auxiliary CPUs,
but the base system timekeeping via _nano_sys_clock_tick_announce() is
still done on CPU0 only (because the framework isn't prepared for
asynchronous notification yet).  Skip processing on CPU1+.

Also, due to a hardware interaction* that is difficult to work around,
timer initialization on the auxiliary CPUs is done at the very end of
the CPU bringup, just before the swap into the scheduler.  A
smp_timer_init() API has been added for this purpose.

* On ESP-32, enabling the timer seems to result in a near-synchronous
  interrupt being delivered despite my best attempts to keep it
  masked, then blowing things up because the CPU record isn't set up
  to handle it yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bdcd18a744 kernel: Enable SMP
Now that all the pieces are in place, enable SMP for real:

Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross e694656345 kernel: Move per-cpu _kernel_t fields into separate struct
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these.  Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.

When not in SMP mode, the first CPU's fields are defined as a union
with the first _cpu_t record.  This permits compatibility with legacy
assembly on other platforms.  Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.

Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
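A rough sketch of the split described above, with illustrative field names
(the real structure keeps additional members and the legacy union for
non-SMP builds):

struct _cpu {
        u32_t nested;                 /* nested interrupt/exception count */
        char *irq_stack;              /* interrupt stack for this CPU     */
        struct k_thread *current;     /* thread running on this CPU       */
};

typedef struct _cpu _cpu_t;

struct _kernel {
        struct _cpu cpus[CONFIG_MP_NUM_CPUS];
        /* ... kernel-wide fields (ready queue, timeouts, ...) ... */
};

/* Each arch provides a way to locate the record of the executing CPU */
extern _cpu_t *_arch_curr_cpu(void);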
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2c1449bc81 kernel, xtensa: Switch-specific thread return value
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 042d8ecca9 kernel: Add alternative _arch_switch context switch primitive
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness.  This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:

    void _arch_switch(void *handle, void **old_handle_out);

The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.

The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y".  A new C _Swap() implementation based
on this primitive is then added which operates compatibly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
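A hedged sketch of how a C _Swap() can be layered on the new primitive; the
scheduler helper and field names are illustrative rather than the exact ones
added by the patch:

static inline int _Swap(unsigned int key)
{
        struct k_thread *old_thread = _current;
        struct k_thread *new_thread = _get_next_ready_thread();

        _current = new_thread;

        /* Hand the CPU to new_thread; our own handle is written through
         * the second argument so that we can be switched back to later. */
        _arch_switch(new_thread->switch_handle, &old_thread->switch_handle);

        irq_unlock(key);

        return _current->swap_retval;   /* set by whoever made us ready again */
}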
Andy Ross 8ac9c082e6 kernel: Move some macros
K_NUM_PRIORITIES and K_NUM_PRIO_BITMAPS were defined in
nano_internal.h, but used in only a handful of places.  Move to
kernel_structs.h (somewhat higher up in the hierarchy) to help with
include file cycle-breaking.  Arguably they are a better fit there
anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Ramakrishna Pallala 301acb8e1b kernel: include: rename nano_internal.h to kernel_internal.h
Rename the nano_internal.h to kernel_internal.h and modify the
header file name accordingly wherever it is used.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-01-31 10:07:21 -06:00
Adithya Baglody 13ac4d4264 kernel: mem_domain: Add an arch interface to configure memory domain
Add architecture-specific code for the memory domain
configuration. This is needed to support the memory domain API
k_mem_domain_add_thread.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-12-21 11:52:27 -08:00
Andrew Boie 9f38d2a91a kernel: have k_sched_lock call _sched_lock
Having two implementations of the same thing is bad,
especially when one can simply call the other's inline version.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-17 17:42:54 -05:00
Adithya Baglody 57832073c6 kernel: arch interface for memory domain
Additional arch specific interfaces to handle memory domain
destroy and single partition removal.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized. Previously, it was necessary to call
  k_thread_create() with a K_FOREVER delay, assign permissions, then
  start the thread. Permissions are still completely cleared when
  a thread exits.

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
Anas Nashif 780324b8ed cleanup: rename fiber/task -> thread
We still have many places talking about tasks and threads, replace those
with thread terminology.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-10-30 18:41:15 -04:00
Andrew Boie 98bf5234dc Revert "kernel: arch interface for memory domain"
This reverts commit 9bbe7bd61e.
2017-10-20 15:02:59 -04:00
Adithya Baglody 9bbe7bd61e kernel: arch interface for memory domain
Additional arch specific interfaces to handle memory domain
destroy and single partition removal.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-20 10:39:51 -07:00
David B. Kinder 4600c37ff1 doc: Fix misspellings in header/doxygen comments
Occasional scan for misspellings missed during PR reviews

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-10-17 19:40:29 -04:00
Andrew Boie c5c104f91e kernel: fix k_thread_stack_t definition
Currently this is defined as a k_thread_stack_t pointer.
However this isn't correct; stacks are defined as arrays. Extern
references to k_thread_stack_t don't work properly, as the compiler
treats them as pointers to the stack array and not the array itself.

Declaring as an unsized array of k_thread_stack_t doesn't work
well either. The least amount of confusion is to leave out the
pointer/array status completely, use pointers for function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.

The definitions for all functions and structs that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:24:29 -07:00
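A small usage sketch of the pattern described above (symbol names are
illustrative):

/* threads.c */
K_THREAD_STACK_DEFINE(shell_stack, 2048);

/* elsewhere.c -- instead of "extern char shell_stack[...]": */
K_THREAD_STACK_EXTERN(shell_stack);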
Andrew Boie a2b40ecfaf userspace handlers: finer control of init state
We also need macros to assert that an object must be in an
uninitialized state. This will be used for validating thread
and stack objects to k_thread_create(), which must not be already
in use.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 19:02:00 -07:00
Andrew Boie 04caa679c9 userspace: allow thread IDs to be re-used
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.

Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.

Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 885fcd5147 userspace: de-initialize aborted threads
This will allow these thread objects to be re-used.

_mark_thread_as_dead() removed, it was only being called in one
place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 4a9a4240c6 userspace: add _k_object_uninit()
API to assist with re-using objects, such as terminated threads or
kernel objects returned to a pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the amount of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
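The "preprocessor trickery" is the usual variadic argument-counting idiom; a
hedged sketch with illustrative macro names (the zero-argument case is
omitted here):

/* Count how many arguments follow the syscall name (1..6 shown) */
#define _NARGS(...) _NARGS_(__VA_ARGS__, 6, 5, 4, 3, 2, 1)
#define _NARGS_(_1, _2, _3, _4, _5, _6, N, ...) N

#define _CAT(a, b) _CAT_(a, b)
#define _CAT_(a, b) a##b

/* _SYSCALL_HANDLER(k_sem_take, sem, timeout) expands to
 * _SYSCALL_HANDLER2(k_sem_take, sem, timeout) */
#define _SYSCALL_HANDLER(name, ...) \
        _CAT(_SYSCALL_HANDLER, _NARGS(__VA_ARGS__))(name, __VA_ARGS__)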
Andrew Boie a89bf01192 kernel: add k_object_access_revoke() system call
Does the opposite of k_object_access_grant(); the provided thread will
lose access to that kernel object.

If invoked from userspace, the caller must have sufficient access
to that object and permission on the thread being revoked access.

Fix documentation for k_object_access_grant() API to reflect that
permission on the thread parameter is needed as well.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 15:08:40 -07:00
Andrew Boie 47f8fd1d4d kernel: add K_INHERIT_PERMS flag
By default, threads are created only having access to their own thread
object and nothing else. This new flag to k_thread_create() gives the
thread access to all objects that the parent had at the time it was
created, with the exception of the parent thread itself.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 12:17:13 -07:00
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 7e3d3d782f kernel: userspace.c code cleanup
- Error-message dumping split from _k_object_validate(), to avoid spam
  in test cases that are expected to fail.

- _k_object_find() prototype moved to syscall_handler.h

- Clean up k_object_access() implementation to avoid double object
  lookup and use single validation function

- Added comments, minor whitespace changes

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 38ac235b42 syscall_handler: handle multiplication overflow
Computing the total size of the array needs to handle the case where
the product overflows a 32-bit unsigned integer.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
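A minimal sketch of the kind of guard being added (names are illustrative;
the real handler raises a syscall oops rather than returning an error):

static int total_array_size(u32_t count, u32_t size, u32_t *total)
{
        /* Reject the request if count * size would wrap a 32-bit total */
        if (count != 0U && size > (0xFFFFFFFFU / count)) {
                return -1;
        }

        *total = count * size;
        return 0;
}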
Andrew Boie 32a08a81ab syscall_handler: introduce new macros
Instead of boolean arguments to indicate memory read/write
permissions, or init/non-init APIs, new macros are introduced
which bake the semantics directly into the name of the macro.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 231b95cfc0 syscalls: add _SYSCALL_VERIFY_MSG()
Expecting stringified expressions to be completely comprehensible to end
users is wishful thinking; we really need to express what a failed
system call verification step means in human terms in most cases.

Memory buffer and kernel object checks now are implemented in terms of
_SYSCALL_VERIFY_MSG.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie cee72411e4 userspace: move _k_object_validate() definition
This API only gets used inside system call handlers and a specific test
case dedicated to it. Move definition to the private kernel header along
with the rest of the defines for system call handlers.

A non-userspace inline variant of this function is unnecessary and has
been deleted.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 468190a795 kernel: convert most thread APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Chunlin Han e9c9702818 kernel: add memory domain APIs
Add the following application-facing memory domain APIs:

k_mem_domain_init() - to initialize a memory domain
k_mem_domain_destroy() - to destroy a memory domain
k_mem_domain_add_partition() - to add a partition into a domain
k_mem_domain_remove_partition() - to remove a partition from a domain
k_mem_domain_add_thread() - to add a thread into a domain
k_mem_domain_remove_thread() - to remove a thread from a domain

A memory domain would contain some number of memory partitions.
A memory partition is a memory region (might be RAM, peripheral
registers, flash...) with specific attributes (access permission,
e.g. privileged read/write, unprivileged read-only, execute never...).
Memory partitions would be defined by a set of MPU regions or MMU tables
underneath.
A thread could only belong to a single memory domain at any point in time,
but a memory domain could contain multiple threads.
Threads in the same memory domain would have the same access permissions
to the memory partitions belonging to that memory domain.

The memory domain APIs are used by unprivileged threads to share data
with threads in the same memory domain and to protect sensitive data from
threads outside their domain. This is not only useful for improving security
but also for debugging (unexpected access would cause an exception).

Jira: ZEP-2281

Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
2017-09-29 16:48:53 -07:00
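A hedged usage sketch of the APIs listed above; the buffer, sizes, and the
partition attribute constant (which is architecture specific) are
illustrative:

static u8_t app_buf[256];

static struct k_mem_domain app_domain;
static struct k_mem_partition app_part = {
        .start = (u32_t)app_buf,
        .size  = sizeof(app_buf),
        .attr  = K_MEM_PARTITION_P_RW_U_RW,    /* illustrative attribute */
};

void app_domain_setup(k_tid_t thread)
{
        k_mem_domain_init(&app_domain, 0, NULL);             /* empty domain */
        k_mem_domain_add_partition(&app_domain, &app_part);
        k_mem_domain_add_thread(&app_domain, thread);
}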
Andrew Boie cbf7c0e47a syscalls: implicit cast for _SYSCALL_MEMORY
Everything gets passed to handlers as u32_t; make it simpler to check
something that is known to be a pointer, like we already do with
_SYSCALL_IS_OBJ().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 15:43:30 -07:00
Andrew Boie fa94ee7460 syscalls: greatly simplify system call declaration
To define a system call, it's now sufficient to simply tag the inline
prototype with "__syscall" or "__syscall_inline" and include a special
generated header at the end of the header file.

The system call dispatch table and enumeration of system call IDs is now
automatically generated.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 13:02:20 -07:00
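In practice a declaration now looks roughly like this (the generated header
name depends on the header being processed, so treat it as illustrative):

__syscall int k_sem_take(struct k_sem *sem, s32_t timeout);

/* ... more prototypes ... */

#include <syscalls/kernel.h>    /* auto-generated at build time */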
Andrew Boie 52563e3b09 syscall_handler.h: fix a typo
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 10:05:46 -07:00
Andrew Boie 13ca6fe284 syscalls: reorganize headers
- syscall.h now contains those APIs needed to support invoking calls
  from user code. Some stuff moved out of main kernel.h.
- syscall_handler.h now contains directives useful for implementing
  system call handler functions. This header is not pulled in by
  kernel.h and is intended to be used by C files implementing kernel
  system calls and driver subsystem APIs.
- syscall_list.h now contains the #defines for system call IDs. This
  list is expected to grow quite large so it is put in its own header.
  This is now an enumerated type instead of defines to make things
  easier as we introduce system calls over the next few months. In the
  fullness of time when we desire to have a fixed userspace/kernel ABI,
  this can always be converted to defines.

Some new code added:

- _SYSCALL_MEMORY() macro added to check memory regions passed up from
  userspace in handler functions
- _syscall_invoke{7...10}() inline functions declared for invoking system
  calls with more than 6 arguments. 10 was chosen as the limit as that
  corresponds to the largest arg list we currently have
  which is for k_thread_create()

Other changes

- auto-generated K_SYSCALL_DECLARE* macros documented
- _k_syscall_table in userspace.c is not a placeholder. There's no
  strong need to generate it and doing so would require the introduction
  of a third build phase.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 08:56:20 -07:00
Chunlin Han 95d28e53bb arch: arm: add initial support for CONFIG_USERSPACE
add related configs & (stub) functions for enabling
CONFIG_USERSPACE on arm w/o build errors.

Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
2017-09-26 10:00:53 -07:00
Andrew Boie a23c245a9a userspace: flesh out internal syscall interface
* Instead of a common system call entry function, we instead create a
table mapping system call ids to handler skeleton functions which are
invoked directly by the architecture code which receives the system
call.

* system call handler prototype specified. All but the most trivial
system calls will implement one of these. They validate all the
arguments, including verifying kernel/device object pointers, ensuring
that the calling thread has appropriate access to any memory buffers
passed in, and performing other parameter checks that the base system
call implementation does not check, or only checks with __ASSERT().

It's only possible to install a system call implementation directly
inside this table if the implementation has a return value and requires
no validation of any of its arguments.

A sample handler implementation for k_mutex_unlock() might look like:

u32_t _syscall_k_mutex_unlock(u32_t mutex_arg, u32_t arg2, u32_t arg3,
                              u32_t arg4, u32_t arg5, void *ssf)
{
        struct k_mutex *mutex = (struct k_mutex *)mutex_arg;
        _SYSCALL_ARG1;

        _SYSCALL_IS_OBJ(mutex, K_OBJ_MUTEX, 0,  ssf);
        _SYSCALL_VERIFY(mutex->lock_count > 0, ssf);
        _SYSCALL_VERIFY(mutex->owner == _current, ssf);

        k_mutex_unlock(mutex);

        return 0;
}

* the x86 port modified to work with the system call table instead of
calling a common handler function. An issue where changed registers could
confuse the compiler has been fixed; all registers, even ones used for
parameters, must be preserved across the system call.

* a new arch API for producing a kernel oops when validating system call
arguments has been added. The debug information reported will be from the system
call site and not inside the handler function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-15 13:44:45 -07:00
Andrew Boie be6740ea77 kernel: define arch interface for memory domains
Based on work by Chunlin Han <chunlin.han@linaro.org>.
This defines the interfaces that architectures will need to implement in
order to support memory domains in either MMU or MPU hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-14 08:59:54 -07:00
Andrew Boie 2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c now implemented.

_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 5cfa5dc8db kernel: add K_USER flag and _is_thread_user()
Indicates that the thread is configured to run in user mode.
Delete stub function in userspace.c

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie f564986d2f kernel: add _k_syscall_entry stub
This is the kernel-side landing site for system calls. It's currently
just a stub.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 1f32d09bd8 kernel: specify arch functions for userspace
Any arches that support userspace will need to implement these
functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie f5adf534e8 kernel: declare interface for checking buffers
This will be used by system call handlers to ensure that any memory
regions passed in from userspace are actually accessible by the calling
thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:41 -07:00
Andrew Boie 1e06ffc815 zephyr: use k_thread_entry_t everywhere
In various places, a private _thread_entry_t or the full prototype
was being used. Be consistent and use the same typedef everywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 11:18:22 -07:00
Youvedeep Singh d787e3c554 timer: k_timer_start should accept 0 as duration parameter.
k_timer_start(timer, duration, period) is API used to
start a timer. Currently the duration parameter accepts
only positive numbers.
But a user may need to start some periodic activity
ASAP, with a duration of 0. So this patch
allows 0 as the minimum value of the duration.
With this patch, when the duration is set to 0, the
timer expiration handler is called directly instead of
submitting the timer to the timeout queue.

Jira: ZEP-2497

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-09-06 10:18:39 -07:00
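A hedged usage sketch (durations are expressed in milliseconds at this point
in the tree):

static struct k_timer my_timer;

static void my_expiry(struct k_timer *timer)
{
        /* periodic work */
}

void start_periodic_work(void)
{
        k_timer_init(&my_timer, my_expiry, NULL);

        /* duration 0: the expiry handler runs immediately,
         * then every 100 ms thereafter */
        k_timer_start(&my_timer, 0, 100);
}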
Luiz Augusto von Dentz 87aa621915 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER whenever possible
SYS_DLIST_FOR_EACH_CONTAINER is preferable over using
SYS_DLIST_FOR_EACH_NODE as it avoids direct casting, which assumes the
node field is always at the beginning.

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:08:50 -04:00
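An illustrative comparison of the two iteration styles (the struct and its
fields are made up for the example):

struct item {
        int value;
        sys_dnode_t node;       /* need not be the first member */
};

static sys_dlist_t items = SYS_DLIST_STATIC_INIT(&items);

void scan_items(void)
{
        sys_dnode_t *n;
        struct item *it;

        /* Old style: the cast is only correct if 'node' is first */
        SYS_DLIST_FOR_EACH_NODE(&items, n) {
                it = (struct item *)n;
                /* ... */
        }

        /* Preferred: the container is computed from the node's offset */
        SYS_DLIST_FOR_EACH_CONTAINER(&items, it, node) {
                /* use it->value directly */
        }
}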
Andrew Boie 507852a4ad kernel: introduce opaque data type for stacks
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data, and also
declared as an array of the desired stack size.

This is no longer the case. Certain architectures will create a memory
region much larger to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack, and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.

We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(), this is no longer treated by the compiler
as a character pointer, even though it really is.

To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.

This should catch a bunch of programming mistakes at build time:

- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
  passing it to K_THREAD_CREATE
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
  which is not actually the memory desired and may trigger a CPU
  exception

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-01 16:43:15 -07:00
Andrew Boie ae1a75b82e stack_sentinel: change cooperative check
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e, _Swap is
called).

This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.

This way is cleaner as we just have the hook in one inline
function rather than implemented in several different assembly
dialects.

The check upon interrupt is now made unconditionally rather
than checking if we are calling __swap, since the check now
is only called on cooperative _Swap(). The interrupt is always
serviced first.

Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-08 13:49:36 -05:00
Andrew Boie 3989de7e3b kernel: fix short time-slice reset
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.

In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset.  If a thread uses up a partial time slice, and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, causing it not to be able to run as long as it ought to.

There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.

Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-02 14:47:01 -04:00
Maciek Borzecki ed016fa9a0 kernel: make sure that _thread_entry() declaration matches with definition
Fixes sparse warning:
  CHECK   <snip>/zephyr/kernel/thread.c
<snip>/zephyr/kernel/thread.c:184:20: error: symbol '_thread_entry' redeclared with different type (originally declared at <snip>/zephyr/kernel/include/nano_internal.h:43) - different modifiers
  CC      kernel/thread.o

Change-Id: I2223493cdf97c811c661773f8fd430e6c00cbaa0
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Andrew Boie 5dcb279df8 debug: add stack sentinel feature
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.

This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
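A hedged sketch of the mechanism; the sentinel value, helper name, and error
hook are illustrative:

#define STACK_SENTINEL 0xF0F0F0F0U

/* Checked when servicing interrupts and on cooperative context switch */
static void check_stack_sentinel(struct k_thread *thread)
{
        u32_t *sentinel = (u32_t *)thread->stack_info.start;

        if (*sentinel != STACK_SENTINEL) {
                /* Restore it so the same error doesn't re-trigger, then oops */
                *sentinel = STACK_SENTINEL;
                _k_except_reason(_NANO_ERR_STACK_CHK_FAIL);
        }
}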
Andrew Boie 41c68ece83 kernel: publish offsets to thread stack info
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andrew Boie d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00
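A hedged usage sketch; names and sizes are illustrative, and the stack
declaration helper shown is the one used elsewhere in this log:

K_THREAD_STACK_DEFINE(worker_stack, 1024);
static struct k_thread worker_thread;      /* lives outside the stack region */

static void worker_entry(void *p1, void *p2, void *p3)
{
        /* thread body */
}

void spawn_worker(void)
{
        k_thread_create(&worker_thread, worker_stack,
                        K_THREAD_STACK_SIZEOF(worker_stack),
                        worker_entry, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(5), 0, K_NO_WAIT);
}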
Ramesh Thomas 89ffd44dfb kernel: tickless: Add tickless kernel support
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.

The implementation involves changes in the following areas

1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure
number of ticks in a second. The new implementation builds on
top of that feature and provides option to set the size of the
scheduling granularity to milliseconds or microseconds. This
allows most of the current implementation to be reused. Due to
this re-use and co-existence with tick based kernel, the names
of variables may contain the word "tick". However, in the
tickless kernel implementation, it represents the currently
configured time unit, which would be milliseconds or
microseconds. The APIs that take time as a parameter are not
impacted and continue to pass time in milliseconds.

2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.

3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.

4. New APIs are defined to be implemented by timer drivers. They also
need to handle timer events differently. These changes
have been made in the HPET timer driver. In the future, other timers
that support the tickless kernel should implement these APIs as well.
These APIs re-program the timer, and update and announce
elapsed time.

5. Philosopher and timer_api applications have been enabled to
test the tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using
following command
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu

Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:28 +00:00
Ramesh Thomas 62eea121b3 kernel: tickless: Rename _Swap to allow creation of macro
Future tickless kernel patches will insert some
code before the call to Swap. To enable this, a macro
with the name of the current _Swap is created, which first
calls the tickless kernel code and then the real __swap().

Jira: ZEP-339
Change-Id: Id778bfcee4f88982c958fcf22d7f04deb4bd572f
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:26 +00:00
Andrew Boie 73abd32a7d kernel: expose struct k_thread implementation
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control on where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.

On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.

Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-26 16:29:06 +00:00
Vincenzo Frascino dfed8c4874 kernel: Add stack_info to k_thread
This patch adds the stack information to the k_thread data structure.
The information will be set when creating a new thread (_new_thread)
and will be used by the scheduling process.

Change-Id: Ibe79fe92a9ef8bce27bf8616d8e0c878508c267d
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
2017-04-25 16:02:38 +00:00
Leandro Pereira ffe74b45fa kernel: Add thread events to kernel event logger
This adds a new event type to the kernel event logger that tracks
thread-related events: being added to the ready queue, pending a
thread, and exiting a thread.

It's the only event type that contains "subevents" and thus has a
non-void parameter in its respective _sys_k_event_logger_*()
function.  Luckily, unlike other events (such as IRQs
and thread switching), these functions are called from
platform-agnostic places, so there's no need to worry about changing
the assembly guts.

This is the first patch in a series adding support for better real-time
profiling of Zephyr applications.

Jira: ZEP-1463
Change-Id: I6d63607ba347f7a9cac3d016fef8f5a0a830e267
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-04-25 02:16:36 +00:00
Kumar Gala 96ee45df8d kernel: refactor thread_monitor_init into common code
We do the same thing on all arches right now for thread_monitor_init, so
let's put it in a common place.  This should also fix an issue on xtensa
when the thread monitor is enabled (reference to _nanokernel.threads).

Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala b8823c4efd kernel: Refactor common _new_thread init code
We do a bit of the same stuff on all the arches to set up a new thread,
so let's put that code in a common place to unify it for everyone and
reduce duplicated code.

Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala 5742a508a2 kernel: cleanup use of naked unsigned in _new_thread
There are a few places where we used a naked unsigned type; let's be
explicit and make it 'unsigned int'.

Change-Id: I33fcbdec4a6a1c0b1a2defb9a5844d282d02d80e
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:41 +00:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependancies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Kumar Gala 34a57db844 Revert "kernel: Convert formatter strings to use PRI defines"
This reverts commit 7b9dc107a8.

We revert this as we intend to move away from {u}int{8,16,32,64}_t types
to our own internal types for sized variables so we shouldn't need the
PRI macros anymore.

Change-Id: I1d9d797fee47ca266867ae65656c150f8fe2adb2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-19 10:50:51 -05:00
Anas Nashif 306e15e0a1 kernel: remove legacy kernel support
Change-Id: Iac1e21677d74f81a93cd29d64cce261676ae78a6
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 15:48:37 +00:00
Kumar Gala 7b9dc107a8 kernel: Convert formatter strings to use PRI defines
To allow for various libc implementations (like newlib) in which the way
various {u}int{8,16,32}_t types are defined vary between both libc
implementations and across architectures we need to utilize the PRI
defines.

Change-Id: Ie884fb67015502288152ecbd64c37961a4f538e4
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-17 11:09:36 -05:00
Benjamin Walsh 5d35dba73d kernel/timeouts: add description of timeouts queued on the same tick
Change-Id: I24ba889e3174b903ccea5309ad45e2b4d1755fe1
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:25 +00:00
Benjamin Walsh cf93743f50 kernel/sched: refactor _get_first_thread_to_unpend()
Modify _get_first_thread_to_unpend() so that it does not remove the
thread from the wait queue. Rename it to _find_first_thread_to_unpend()
to match the new behaviour.

This will be needed to fix a semaphore group bug.

Change-Id: I1b7531c3beecf3b6a86ecf88a93a02449edd0767
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:24 +00:00
Benjamin Walsh c1405a7d6b kernel/sched: add _is_thread_dummy()
Rather than explicitly checking the thread state bit.

Change-Id: Ic78427d9847e627a0e91d0147d3b6164450597f6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:24 +00:00
Benjamin Walsh 8d7c274e55 kernel/sched: protect thread sched_lock with compiler barriers
This has not bitten us yet, but it was a ticking timebomb.

This is similar to the issue that was found with irq_lock/irq_unlock
implementations on several architectures. Having a volatile variable is
not the way to force the sched_lock variable to be
incremented/decremented around the accesses to data it protects.
Instead, a compiler barrier must prevent the compiler from reordering
the memory accesses around setting of sched_lock. Needed in the inline
implementations _sched_lock()/_sched_unlock_no_reschedule(), which
resolve to simple decrement/increment of the per-thread sched_lock
variable.

Change-Id: I06f5b3524889f193efe69caa947118404b1be0b5
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:21 +00:00
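A hedged sketch of the resulting pattern; compiler_barrier() stands in for
whatever the toolchain headers provide:

#define compiler_barrier() __asm__ __volatile__ ("" ::: "memory")

static ALWAYS_INLINE void _sched_lock(void)
{
        --_current->base.sched_locked;    /* counts down: 0xff..0x01 = locked */
        compiler_barrier();               /* protected accesses stay after    */
}

static ALWAYS_INLINE void _sched_unlock_no_reschedule(void)
{
        compiler_barrier();               /* protected accesses stay before   */
        ++_current->base.sched_locked;
}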
Luiz Augusto von Dentz 41921dd5b9 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER
Change-Id: I4cbb12af487217cfcb78969ec88a8e4c06eca27f
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-10 16:16:14 +00:00
Benjamin Walsh fcdb0fd6ea kernel: add _WAIT_Q_INIT()
Dissociate wait queue initialization from doubly-linked lists if the
underlying implementation is to be abstracted.

Change-Id: Id7544c6ac506643437f9c4f0ae97e7eecab8d06d
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:30:00 +00:00
Benjamin Walsh 0de9487351 kernel: add _THREAD_POLLING thread state
Will be needed for k_poll() API.

Change-Id: I0ebe4be5a9c56df2ebb8496dc49c894e982e6008
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:59 +00:00
Benjamin Walsh 0a49ba38b8 kernel: add _is_thread_state_set()
Change-Id: I2b6a51c23997afeb5252a3632172156ba96252ce
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:58 +00:00
Benjamin Walsh ed240f2796 kernel/arch: streamline thread user options
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to
publicize them.

Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.

Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh 867f8ee371 kernel: move K_ESSENTIAL from thread_state to execution_flags
The execution_flags will store the user-facing states of a thread.

This also fixes a bug where K_ESSENTIAL was already assigned to
execution_flags via the options field of
k_thread_spawn()/K_THREAD_DEFINE().

Change-Id: I91ad7a62b5d180e09eead8985ff519809959ecf2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
Benjamin Walsh a8978aba8f kernel: rename thread states symbols
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.

Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
Benjamin Walsh 3f3f4d94d5 kernel: remove K_STATIC
Unused.

Reuse bit for K_FP_REGS to keep the used bits the lowest possible.

Change-Id: I5998801ef34156271d4f66d1948a05e0b2ce58f7
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00
Benjamin Walsh dfa7ce5c94 kernel: include kernel.h in kernel_structs.h in asm files
This will be needed for some thread user options that will move to
kernel.h since they are part of the user API.

Change-Id: I46e302b6cafcdddbad3458134b98feb5b8d45d9b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00
Benjamin Walsh a5d8461d74 kernel: move volatile from k_thread.prio to k_thread.sched_locked
When prio and sched_locked were moved into a struct together to create a
union with the combined preempt field, the volatile qualifier moved from
sched_locked to prio by mistake.

Change-Id: I5a8e01324f14e77e3d7162c12515471826023633
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:47 +00:00
David B. Kinder ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file.  Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Benjamin Walsh 2f280416e6 kernel: fix total number of coop prios in coop-only mode
The idle priority was not accounted for.

With this change, the philosophers demo runs in coop-only mode.

Change-Id: I23db33687bcf3b2107d5fc07977143730f62e476
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-17 12:17:27 +00:00
Benjamin Walsh 168695c7ef kernel/arch: inspect prio/sched_locked together for preemptibility
These two fields in the thread structure control the preemptibility of a
thread.

sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.

By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.

Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:25 +00:00
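A hedged sketch of the layout on a little-endian target; struct, field, and
macro names are illustrative:

struct thread_base_bits {
        union {
                struct {
                        s8_t prio;             /* negative => cooperative */
                        u8_t sched_locked;     /* 0xff..0x01 => locked    */
                };
                u16_t preempt;                 /* both bytes read at once */
        };
};

#define _NON_PREEMPT_THRESHOLD 0x0080U

/* A single 16-bit compare decides whether the thread may be preempted */
static inline int is_preemptible(struct thread_base_bits *b)
{
        return b->preempt < _NON_PREEMPT_THRESHOLD;
}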
Benjamin Walsh f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32-bit wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.

- prio can fit in one byte, limiting the priorities range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare (on x86, on other archs, there are six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Anas Nashif f6e039062a kernel: remove dependency on CONFIG_NANO_TIMERS/TIMEOUTS
Remove legacy option and use SYS_CLOCK_EXISTS where appropriate.

Change-Id: I3d524ea2776e638683f0196c0cc342359d5d810f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-08 18:09:52 +00:00
Benjamin Walsh 66b99f1486 kernel: add _timeout_q dump before and after adding timeout
Kernel debugging aid.

Change-Id: I852ba2f626f133d943be2ecac41354fecca478d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:27 +00:00
Benjamin Walsh 99eef25815 kernel: do not use sys_dlist_insert_at() in _add_timeout()
Similar to _pend_queue, it's more efficient to do the logic inline.

Change-Id: I68ac4fbc26c97b6ec9322caef98504ff6ccc8727
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh d779f3d240 kernel/arch: streamline thread flag bits used
Use least significant bits for common flags and high bits for
arch-specific ones.

Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh 04ed860c68 kernel: make _thread.sched_locked a non-atomic operator variable
Not needed, since only the thread itself can modify its own
sched_locked count.

Change-Id: I3d3d8be548d2b24ca14f51637cc58bda66f8b9ee
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:23 +00:00
Benjamin Walsh 6209218f40 kernel: optimize ms-to-ticks for certain tick frequencies
Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.

- 1000Hz which does not need any conversion
- 500Hz, 250Hz, 125Hz where the division/multiplication are a straight
  shift since they are power-of-two factors of 1000.

In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one that uses 64-bit math,
and often results in calling compiler intrinsics.

These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).

Avoiding the 64-bit math intrinsics has the additional benefit, in
addition to increased performance, of using a significantly lower amount
of stack space: 52 bytes on ARM Cortex-M and 80 bytes on x86.

Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:07 +00:00
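A hedged sketch of the special-casing (the config macro name follows the
tree; the generic fall-back is simplified here):

static ALWAYS_INLINE s32_t _ms_to_ticks(s32_t ms)
{
#if CONFIG_SYS_CLOCK_TICKS_PER_SEC == 1000
        return ms;                     /* 1 tick == 1 ms                      */
#elif CONFIG_SYS_CLOCK_TICKS_PER_SEC == 500
        return ms >> 1;                /* power-of-two factor: straight shift */
#elif CONFIG_SYS_CLOCK_TICKS_PER_SEC == 250
        return ms >> 2;
#elif CONFIG_SYS_CLOCK_TICKS_PER_SEC == 125
        return ms >> 3;
#else
        /* generic case: 64-bit math, often ending up in compiler intrinsics */
        return (s32_t)(((s64_t)ms * CONFIG_SYS_CLOCK_TICKS_PER_SEC) / 1000);
#endif
}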
Benjamin Walsh eec37e6752 kernel: add flag that tells the system is handling timeouts
This limits the execution contexts that will go over the loop in
_unpend_first_thread() to only ISRs of very high priority that are
preempting the system clock timer ISR, and only during the time it is
handling timeouts.

Change-Id: Iaf0500d28a2de5e077c9cf9861a5a70244127d58
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:05 +00:00
Anas Nashif dc3d73bf58 kernel: fix all nanokernel usage in comments
Also include kernel.h instead of nanokernel.h

Change-Id: I65dc5e31b5409b809397296817e2d5e7adf28892
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:03 +00:00
Anas Nashif d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00