Commit graph

463 commits

Author SHA1 Message Date
Andy Ross 4ca0e07088 kernel: Add _unpend_all convenience wrapper to scheduler API
Refactoring.  Mempool wants to unpend all threads at once.  It's
cleaner to do this in the scheduler instead of the IPC code.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie 8345e5ebf0 syscalls: remove policy from handler checks
The various macros to do checks in system call handlers would all
implicitly generate a kernel oops if a check failed.
This is undesirable for a few reasons:

* System call handlers that acquire resources in the handler
  have no good recourse for cleanup if a check fails.
* In some cases we may want to propagate a return value back
  to the caller instead of just killing the calling thread,
  even though the base API doesn't do these checks.

These macros now all return a value: if nonzero is returned,
the check failed. K_OOPS() now wraps these calls to generate
a kernel oops.

At the moment, the policy for all APIs has not changed. They
still all oops upon a failed check.

The macros now use the Z_ notation for private APIs.
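
A minimal sketch of the resulting call-site pattern; the handler shape and
the object-check macro name are assumptions based on this message:

    _SYSCALL_HANDLER(k_sem_give, sem)
    {
        /* Z_SYSCALL_OBJ() now returns nonzero on failure instead of
         * oopsing on its own; K_OOPS() applies the old policy here.
         */
        K_OOPS(Z_SYSCALL_OBJ(sem, K_OBJ_SEM));
        _impl_k_sem_give((struct k_sem *)sem);
        return 0;
    }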

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie 92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
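
A sketch of the intended flow; the pool-assignment and allocator names are
assumptions based on this description:

    /* Give the thread a private pool; kernel-side allocations made
     * on its behalf draw from this pool, not the system heap.
     */
    K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

    k_thread_resource_pool_assign(&app_thread, &app_pool);

    /* Later, inside a kernel API servicing that thread: */
    void *buf = z_thread_malloc(len);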

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie 42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.
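
A sketch of an application doing that programming itself; the partition
attribute is arch-specific and illustrative:

    void *base;
    size_t size;

    z_newlib_get_heap_bounds(&base, &size);

    /* Alignment constraints elided; the kernel will not program
     * the MPU/MMU region for the heap automatically.
     */
    struct k_mem_partition heap_part = {
        .start = (u32_t)base,
        .size = size,
        .attr = K_MEM_PARTITION_P_RW_U_RW,
    };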

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.

Fixes: #6814

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
Andy Ross 15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.
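
A sketch of the corrected accounting with a per-thread recursion count
(field and helper names illustrative):

    unsigned int irq_lock(void)
    {
        /* A per-thread count is safe: a global count could be
         * nonzero because *another* CPU holds the lock.
         */
        if (_current->base.global_lock_count++ == 0) {
            _global_irq_lock();  /* spin on the real hardware lock */
        }
        return 0;  /* key plumbing elided */
    }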

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that, on
  common architectures, is expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Leandro Pereira c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
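
The shape of such a check in a UART handler, sketched with the
verification macros from the commits below:

    _SYSCALL_HANDLER(uart_err_check, dev)
    {
        const struct uart_driver_api *api;

        _SYSCALL_OBJ(dev, K_OBJ_DRIVER_UART);
        api = ((struct device *)dev)->driver_api;

        /* A NULL entry would otherwise become a jump to 0x00000000 */
        _SYSCALL_VERIFY_MSG(api->err_check != NULL,
                            "err_check not implemented");
        return _impl_uart_err_check((struct device *)dev);
    }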

Fixes #6907.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Andy Ross e7ded11a2e kernel: Prune ksched.h of dead code
There was a ton of junk in this header.  Pare it down to just the
stuff actually used by code outside of sched.c, move the needed
internal stuff into sched.c itself, and drop everything else.

Note that (other than the tiny inlines that remain here in the header)
the scheduler interface exposed to the rest of the system is now
composed of just 12 functions.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-25 13:13:23 -07:00
Andy Ross 8a4b2e8cf2 kernel, posix: Move ready_one_thread() to scheduler
The POSIX layer had a simple ready_one_thread() utility.  Move this to
the scheduler API (with a prepended underscore -- it's an internal
API) so that it can be synchronized along with the rest of the
scheduler.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 22642cf309 kernel: Clean up _unpend_thread() API
Almost everywhere this was called, it was immediately followed by
_abort_thread_timeout(), for obvious reasons.  The only exceptions
were in timeout and k_timer expiration (unifying these two would be
another good cleanup), which are peripheral parts of the scheduler and
can plausibly use a more "internal" API.

So make the common case the default, and expose the old behavior as
_unpend_thread_no_timeout().  (Along with identical changes for
_unpend_first_thread) Saves code bytes and simplifies scheduler
surface area for future synchronization work.
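
A sketch of the resulting split:

    /* Common case: unpending a thread also aborts its timeout */
    void _unpend_thread(struct k_thread *thread)
    {
        _unpend_thread_no_timeout(thread);
        _abort_thread_timeout(thread);
    }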

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 15cb5d7293 kernel: Further unify _reschedule APIs
Now that other work has eliminated the two cases where we had to do a
reschedule "but yield even if we are cooperative", we can squash both
down to a single _reschedule() function which does almost exactly what
legacy _Swap() did, but wrapped as a proper scheduler API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross e0a572beeb kernel: Refactor, unifying _pend_current_thread() + _Swap() idiom
Everywhere the current thread is pended, the code is going to have to
do a _Swap() soon afterward, yet the scheduler API exposed these as
separate steps.  Unify this pattern everywhere it appears, which saves
some code bytes and gets _Swap() out of the general scheduler API at
zero cost.
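
A sketch of the unified helper:

    /* Pend the current thread on a wait queue and swap away */
    int _pend_current_thread(int key, _wait_q_t *wait_q, s32_t timeout)
    {
        _pend_thread(_current, wait_q, timeout);
        return _Swap(key);
    }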

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross 8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...
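
Typical call sites then collapse to something like:

    /* Inside an IPC primitive, after readying a waiter: */
    _ready_thread(thread);
    _reschedule_noyield(key);  /* respects cooperative priority */

    /* The few legacy unconditional-swap sites: */
    _reschedule_yield(key);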

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Leandro Pereira 541c3cb18b kernel: sched: Fix validation of priority levels
A priority value cannot be simultaneously higher than the maximum
possible value and smaller than the minimum value.  Rewrite the
_VALID_PRIO() macro as a function so that if either of these
invariants is violated, the priority is considered invalid.

Coverity-CID: 182584
Coverity-CID: 182585
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-21 08:39:42 -07:00
Anas Nashif c7f5cc9bcb license: fix spdx identifier in a few files
Use correct SPDX identifier for Apache 2.0.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-04-12 15:19:51 -04:00
Kristian Klomsten Skordal c39e2a2d6c kernel: Fix left shift into sign bit
The result of left shifting a bit into the sign-bit is undefined
behavior. This makes the offending shift operation unsigned.

Signed-off-by: Kristian Klomsten Skordal <kristian.skordal@nordicsemi.no>
2018-03-22 19:16:17 -04:00
Andy Ross 85bc0a3fe6 kernel: Cleanup, unify _add_thread_to_ready_q() and _ready_thread()
The scheduler exposed two APIs to do the same thing:
_add_thread_to_ready_q() was a low level primitive that in most cases
was wrapped by _ready_thread(), which also (1) checks that the thread
_is_ready() or exits, (2) flags the thread as "started" to handle the
case of a thread running for the first time out of a waitq timeout,
and (3) signals a logger event.

As it turns out, all existing usage was already checking case #1.
Case #2 can be better handled in the timeout resume path instead of on
every call.  And case #3 was probably wrong to have been skipping
anyway (there were paths that could make a thread runnable without
logging).

Now _add_thread_to_ready_q() is an internal scheduler API, as it
probably always should have been.

This also moves some asserts from the inline _ready_thread() wrapper
to the underlying true function for code size reasons, otherwise the
extra use of the inline added by this patch blows past code size
limits on Quark D2000.
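
A sketch of the resulting shape:

    /* Thin inline wrapper; asserts live in the underlying function */
    static inline void _ready_thread(struct k_thread *thread)
    {
        if (_is_thread_ready(thread)) {
            _add_thread_to_ready_q(thread);
        }
    }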

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 9d367eeb0a xtensa, kernel/sched: Move next switch_handle selection to the scheduler
The xtensa asm2 layer had a function to select the next switch handle
to return into following an exception.  There is no arch-specific code
there, it's just scheduler logic.  Move it to the scheduler where it
belongs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-03-18 16:58:12 -04:00
Andy Ross 28192fd8ea kernel/kswap.h: Hook event logger from switch-based _Swap
The new generic _Swap() forgot the event logger hook

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 564f59060c kernel: SMP timer integration
In SMP, the system timer is used for timeslicing on auxiliary CPUs,
but the base system timekeeping via _nano_sys_clock_tick_announce() is
still done on CPU0 only (because the framework isn't prepared for
asynchronous notification yet).  Skip processing on CPU1+.

Also, due to a hardware interaction* that is difficult to work around,
timer initialization on the auxiliary CPUs is done at the very end of
the CPU bringup, just before the swap into the scheduler.  A
smp_timer_init() API has been added for this purpose.

* On ESP-32, enabling the timer seems to result in a near-synchronous
  interrupt being delivered despite my best attempts to keep it
  masked, then blowing things up because the CPU record isn't set up
  to handle it yet.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross bdcd18a744 kernel: Enable SMP
Now that all the pieces are in place, enable SMP for real:

Initialize the CPU records, launch the CPUs at the end of kernel
initialization, have them wait for a flag to release them into the
scheduler, then enter into the runnable threads via _Swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross e694656345 kernel: Move per-cpu _kernel_t fields into separate struct
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these.  Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.
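
A sketch of the layout:

    /* Per-CPU bookkeeping; one entry per CPU under SMP */
    struct _cpu {
        u32_t nested;              /* nested interrupt count */
        char *irq_stack;           /* interrupt stack */
        struct k_thread *current;  /* currently scheduled thread */
    };

    struct _cpu *_arch_curr_cpu(void);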

When not in SMP mode, the first CPU's fields are defined as a union
with the first _cpu_t record.  This permits compatibility with legacy
assembly on other platforms.  Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.

Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 2c1449bc81 kernel, xtensa: Switch-specific thread return value
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 042d8ecca9 kernel: Add alternative _arch_switch context switch primitive
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness.  This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:

    void _arch_switch(void *handle, void **old_handle_out);

The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.

The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y".  A new C _Swap() implementation based
on this primitive is then added which operates compatibly.
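
A sketch of what the C _Swap() looks like on top of the primitive
(return-value plumbing is the subject of the commit just above):

    static inline int _Swap(unsigned int key)
    {
        struct k_thread *old_thread = _current;

        _current = _get_next_ready_thread();
        _arch_switch(_current->switch_handle,
                     &old_thread->switch_handle);

        irq_unlock(key);
        return _current->swap_retval;  /* generic return-value hook */
    }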

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross 8ac9c082e6 kernel: Move some macros
K_NUM_PRIORITIES and K_NUM_PRIO_BITMAPS were defined in
nano_internal.h, but used in only a handful of places.  Move to
kernel_structs.h (somewhat higher up in the hierarchy) to help with
include file cycle-breaking.  Arguably they are a better fit there
anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Ramakrishna Pallala 301acb8e1b kernel: include: rename nano_internal.h to kernel_internal.h
Rename the nano_internal.h to kernel_internal.h and modify the
header file name accordingly wherever it is used.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-01-31 10:07:21 -06:00
Adithya Baglody 13ac4d4264 kernel: mem_domain: Add an arch interface to configure memory domain
Add architecture-specific code for the memory domain
configuration. This is needed to support the memory domain API
k_mem_domain_add_thread().

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-12-21 11:52:27 -08:00
Andrew Boie 9f38d2a91a kernel: have k_sched_lock call _sched_lock
Having two implementations of the same thing is bad,
especially when one can just call the other inline version.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-17 17:42:54 -05:00
Adithya Baglody 57832073c6 kernel: arch interface for memory domain
Additional arch specific interfaces to handle memory domain
destroy and single partition removal.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie 818a96d3af userspace: assign thread IDs at build time
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:

* Threads can be granted permissions on kernel objects before the
  thread is initialized. Previously, it was necessary to call
  k_thread_create() with a K_FOREVER delay, assign permissions, then
  start the thread. Permissions are still completely cleared when
  a thread exits.

* No need for runtime logic to manage thread IDs

* Build error if CONFIG_MAX_THREAD_BYTES is set too low

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-03 11:29:23 -07:00
Anas Nashif 780324b8ed cleanup: rename fiber/task -> thread
We still have many places talking about tasks and fibers; replace those
with thread terminology.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-10-30 18:41:15 -04:00
Andrew Boie 98bf5234dc Revert "kernel: arch interface for memory domain"
This reverts commit 9bbe7bd61e.
2017-10-20 15:02:59 -04:00
Adithya Baglody 9bbe7bd61e kernel: arch interface for memory domain
Additional arch specific interfaces to handle memory domain
destroy and single partition removal.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-20 10:39:51 -07:00
David B. Kinder 4600c37ff1 doc: Fix misspellings in header/doxygen comments
Occasional scan for misspellings missed during PR reviews

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2017-10-17 19:40:29 -04:00
Andrew Boie c5c104f91e kernel: fix k_thread_stack_t definition
Currently this is defined as a k_thread_stack_t pointer.
However this isn't correct; stacks are defined as arrays. Extern
references to k_thread_stack_t don't work properly, as the compiler
treats them as a pointer to the stack array and not the array itself.

Declaring as an unsized array of k_thread_stack_t doesn't work
well either. The least amount of confusion is to leave out the
pointer/array status completely, use pointers for function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.

The definitions for all functions and struct that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.
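
Sketch of the convention:

    /* In a header: proper extern reference to a stack object */
    K_THREAD_STACK_EXTERN(shared_stack);

    /* Function prototypes keep using the pointer form: */
    void start_worker(k_thread_stack_t *stack, size_t size);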

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:24:29 -07:00
Andrew Boie a2b40ecfaf userspace handlers: finer control of init state
We also need macros to assert that an object must be in an
uninitialized state. This will be used for validating thread
and stack objects passed to k_thread_create(), which must not
already be in use.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 19:02:00 -07:00
Andrew Boie 04caa679c9 userspace: allow thread IDs to be re-used
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.

Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.

Because of these runtime updates, setting the permission
bitmap for an object to all ones for a "public" object doesn't
work properly any more; a flag is now set for this instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 885fcd5147 userspace: de-initialize aborted threads
This will allow these thread objects to be re-used.

_mark_thread_as_dead() removed, it was only being called in one
place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Andrew Boie 4a9a4240c6 userspace: add _k_object_uninit()
API to assist with re-using objects, such as terminated threads or
kernel objects returned to a pool.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 16:16:28 -07:00
Leandro Pereira 6f99bdb02a kernel: Provide only one _SYSCALL_HANDLER() macro
Use some preprocessor trickery to automatically deduce the number of
arguments for the various _SYSCALL_HANDLERn() macros.  Makes the grunt
work of converting a bunch of kernel APIs to system calls slightly
easier.
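
The usual variadic-counting trick, sketched with illustrative macro names:

    /* Count the arguments after the handler name, then paste the
     * count onto the prefix to select _SYSCALL_HANDLERn().
     */
    #define _NARGS(...) _NARGS_X(__VA_ARGS__, 5, 4, 3, 2, 1, 0, ~)
    #define _NARGS_X(_name, _1, _2, _3, _4, _5, N, ...) N
    #define _CAT(a, b) _CAT_X(a, b)
    #define _CAT_X(a, b) a##b
    #define _SYSCALL_HANDLER(...) \
        _CAT(_SYSCALL_HANDLER, _NARGS(__VA_ARGS__))(__VA_ARGS__)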

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-10-16 13:42:15 -04:00
Andrew Boie a89bf01192 kernel: add k_object_access_revoke() system call
Does the opposite of k_object_access_grant(); the provided thread will
lose access to that kernel object.

If invoked from userspace, the caller must have sufficient access
to that object and permission on the thread whose access is being revoked.

Fix documentation for k_object_access_grant() API to reflect that
permission on the thread parameter is needed as well.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 15:08:40 -07:00
Andrew Boie 47f8fd1d4d kernel: add K_INHERIT_PERMS flag
By default, threads are created only having access to their own thread
object and nothing else. This new flag to k_thread_create() gives the
thread access to all objects that the parent had at the time it was
created, with the exception of the parent thread itself.
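
Usage sketch:

    /* Child inherits the parent's object permissions (except on the
     * parent thread object itself).
     */
    k_thread_create(&child, child_stack,
                    K_THREAD_STACK_SIZEOF(child_stack),
                    child_entry, NULL, NULL, NULL,
                    K_PRIO_PREEMPT(5), K_INHERIT_PERMS | K_USER,
                    K_NO_WAIT);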

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-13 12:17:13 -07:00
Andrew Boie 225e4c0e76 kernel: greatly simplify syscall handlers
We now have macros which should significantly reduce the amount of
boilerplate involved with defining system call handlers.

- Macros which define the proper prototype based on number of arguments
- "SIMPLE" variants which create handlers that don't need anything
  other than object verification
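
For instance, a handler needing only object verification might be declared
as follows (macro name a plausible sketch, not verbatim):

    /* Verify 'sem' is a valid K_OBJ_SEM, then call the impl */
    _SYSCALL_HANDLER1_SIMPLE(k_sem_give, K_OBJ_SEM, struct k_sem *);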

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 7e3d3d782f kernel: userspace.c code cleanup
- Dumping of error messages is split from _k_object_validate(), to avoid
  spam in test cases that are expected to fail.

- _k_object_find() prototype moved to syscall_handler.h

- Clean up k_object_access() implementation to avoid double object
  lookup and use single validation function

- Added comments, minor whitespace changes

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-12 16:26:28 -05:00
Andrew Boie 38ac235b42 syscall_handler: handle multiplication overflow
Computing the total size of the array needs to handle the case where
the product overflows a 32-bit unsigned integer.
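
One way to sketch the check (variable names illustrative):

    /* Reject totals that wrap around a 32-bit unsigned integer */
    u32_t total;

    if (__builtin_umul_overflow(nmemb, size, &total)) {
        /* fail the verification */
    }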

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 32a08a81ab syscall_handler: introduce new macros
Instead of boolean arguments to indicate memory read/write
permissions, or init/non-init APIs, new macros are introduced
which bake the semantics directly into the name of the macro.
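
For example:

    /* Semantics in the name, not in a boolean argument: */
    _SYSCALL_MEMORY_READ(buf, size);
    _SYSCALL_MEMORY_WRITE(buf, size);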

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 231b95cfc0 syscalls: add _SYSCALL_VERIFY_MSG()
Expecting stringified expressions to be completely comprehensible to end
users is wishful thinking; we really need to express what a failed
system call verification step means in human terms in most cases.

Memory buffer and kernel object checks now are implemented in terms of
_SYSCALL_VERIFY_MSG.
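
A sketch, following the handler example further down this log (the exact
parameter list is an assumption):

    /* Report the failure in human terms, not a stringified expr */
    _SYSCALL_VERIFY_MSG(timeout >= 0, ssf, "negative timeout");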

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie cee72411e4 userspace: move _k_object_validate() definition
This API only gets used inside system call handlers and a specific test
case dedicated to it. Move definition to the private kernel header along
with the rest of the defines for system call handlers.

A non-userspace inline variant of this function is unnecessary and has
been deleted.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-11 17:54:47 -07:00
Andrew Boie 468190a795 kernel: convert most thread APIs to system calls
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-07 10:45:15 -07:00
Chunlin Han e9c9702818 kernel: add memory domain APIs
Add the following application-facing memory domain APIs:

k_mem_domain_init() - to initialize a memory domain
k_mem_domain_destroy() - to destroy a memory domain
k_mem_domain_add_partition() - to add a partition into a domain
k_mem_domain_remove_partition() - to remove a partition from a domain
k_mem_domain_add_thread() - to add a thread into a domain
k_mem_domain_remove_thread() - to remove a thread from a domain

A memory domain would contain some number of memory partitions.
A memory partition is a memory region (might be RAM, peripheral
registers, flash...) with specific attributes (access permission,
e.g. privileged read/write, unprivileged read-only, execute never...).
Memory partitions would be defined by a set of MPU regions or MMU tables
underneath.
A thread can only belong to a single memory domain at any point in time,
but a memory domain can contain multiple threads.
Threads in the same memory domain have the same access permissions
to the memory partitions belonging to that domain.

The memory domain APIs are used by unprivileged threads to share data
with the threads in the same memory domain and to protect sensitive data
from threads outside their domain. This is not only an improvement for
security but also useful for debugging (an unexpected access causes an
exception).
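
A usage sketch; the partition attribute is arch-specific and illustrative,
and alignment constraints are elided:

    static u8_t shared_buf[256];

    struct k_mem_partition shared_part = {
        .start = (u32_t)shared_buf,
        .size = sizeof(shared_buf),
        .attr = K_MEM_PARTITION_P_RW_U_RW,
    };
    struct k_mem_partition *parts[] = { &shared_part };
    struct k_mem_domain app_domain;

    k_mem_domain_init(&app_domain, ARRAY_SIZE(parts), parts);
    k_mem_domain_add_thread(&app_domain, thread_a);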

Jira: ZEP-2281

Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
2017-09-29 16:48:53 -07:00
Andrew Boie cbf7c0e47a syscalls: implicit cast for _SYSCALL_MEMORY
Everything gets passed to handlers as u32_t; make it simpler to check
something that is known to be a pointer, like we already do with
_SYSCALL_IS_OBJ().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 15:43:30 -07:00
Andrew Boie fa94ee7460 syscalls: greatly simplify system call declaration
To define a system call, it's now sufficient to simply tag the inline
prototype with "__syscall" or "__syscall_inline" and include a special
generated header at the end of the header file.

The system call dispatch table and enumeration of system call IDs is now
automatically generated.
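
Declaring a system call then looks like this (the generated header path
varies per file; shown as an assumption):

    /* In a public header: */
    __syscall int k_sem_take(struct k_sem *sem, s32_t timeout);

    /* ...and at the very end of that header: */
    #include <syscalls/kernel.h>  /* generated */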

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-29 13:02:20 -07:00
Andrew Boie 52563e3b09 syscall_handler.h: fix a typo
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 10:05:46 -07:00
Andrew Boie 13ca6fe284 syscalls: reorganize headers
- syscall.h now contains those APIs needed to support invoking calls
  from user code. Some stuff moved out of main kernel.h.
- syscall_handler.h now contains directives useful for implementing
  system call handler functions. This header is not pulled in by
  kernel.h and is intended to be used by C files implementing kernel
  system calls and driver subsystem APIs.
- syscall_list.h now contains the #defines for system call IDs. This
  list is expected to grow quite large so it is put in its own header.
  This is now an enumerated type instead of defines to make things
  easier as we introduce system calls over the next few months. In the
  fullness of time when we desire to have a fixed userspace/kernel ABI,
  this can always be converted to defines.

Some new code added:

- _SYSCALL_MEMORY() macro added to check memory regions passed up from
  userspace in handler functions
- _syscall_invoke{7...10}() inline functions declared for invoking system
  calls with more than 6 arguments. 10 was chosen as the limit as that
  corresponds to the largest arg list we currently have
  which is for k_thread_create()

Other changes

- auto-generated K_SYSCALL_DECLARE* macros documented
- _k_syscall_table in userspace.c is not a placeholder. There's no
  strong need to generate it and doing so would require the introduction
  of a third build phase.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 08:56:20 -07:00
Chunlin Han 95d28e53bb arch: arm: add initial support for CONFIG_USERSPACE
Add related configs and (stub) functions for enabling
CONFIG_USERSPACE on ARM without build errors.

Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
2017-09-26 10:00:53 -07:00
Andrew Boie a23c245a9a userspace: flesh out internal syscall interface
* Instead of a common system call entry function, we create a
table mapping system call ids to handler skeleton functions which are
invoked directly by the architecture code which receives the system
call.

* system call handler prototype specified. All but the most trivial
system calls will implement one of these. They validate all the
arguments, including verifying kernel/device object pointers, ensuring
that the calling thread has appropriate access to any memory buffers
passed in, and performing other parameter checks that the base system
call implementation does not check, or only checks with __ASSERT().

It's only possible to install a system call implementation directly
inside this table if the implementation has a return value and requires
no validation of any of its arguments.

A sample handler implementation for k_mutex_unlock() might look like:

u32_t _syscall_k_mutex_unlock(u32_t mutex_arg, u32_t arg2, u32_t arg3,
                              u32_t arg4, u32_t arg5, void *ssf)
{
        struct k_mutex *mutex = (struct k_mutex *)mutex_arg;
        _SYSCALL_ARG1;

        _SYSCALL_IS_OBJ(mutex, K_OBJ_MUTEX, 0,  ssf);
        _SYSCALL_VERIFY(mutex->lock_count > 0, ssf);
        _SYSCALL_VERIFY(mutex->owner == _current, ssf);

        k_mutex_unlock(mutex);

        return 0;
}

* the x86 port modified to work with the system call table instead of
calling a common handler function. An issue where changed registers
could confuse the compiler has been fixed: all registers, even
ones used for parameters, must be preserved across the system call.

* a new arch API for producing a kernel oops when validating system call
arguments was added. The debug information reported will be from the system
call site and not inside the handler function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-15 13:44:45 -07:00
Andrew Boie be6740ea77 kernel: define arch interface for memory domains
Based on work by Chunlin Han <chunlin.han@linaro.org>.
This defines the interfaces that architectures will need to implement in
order to support memory domains in either MMU or MPU hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-14 08:59:54 -07:00
Andrew Boie 2acfcd6b05 userspace: add thread-level permission tracking
Now creating a thread will assign it a unique, monotonically increasing
id which is used to reference the permission bitfield in the kernel
object metadata.

Stub functions in userspace.c now implemented.

_new_thread is now wrapped in a common function with pre- and post-
architecture thread initialization tasks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 5cfa5dc8db kernel: add K_USER flag and _is_thread_user()
Indicates that the thread is configured to run in user mode.
Delete stub function in userspace.c

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie f564986d2f kernel: add _k_syscall_entry stub
This is the kernel-side landing site for system calls. It's currently
just a stub.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie 1f32d09bd8 kernel: specify arch functions for userspace
Any arches that support userspace will need to implement these
functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Andrew Boie f5adf534e8 kernel: declare interface for checking buffers
This will be used by system call handlers to ensure that any memory
regions passed in from userspace are actually accessible by the calling
thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:41 -07:00
Andrew Boie 1e06ffc815 zephyr: use k_thread_entry_t everywhere
In various places, a private _thread_entry_t, or the full prototype
were being used. Be consistent and use the same typedef everywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 11:18:22 -07:00
Youvedeep Singh d787e3c554 timer: k_timer_start should accept 0 as duration parameter.
k_timer_start(timer, duration, period) is the API used to
start a timer. Currently the duration parameter accepts
only positive numbers.
But a user may need to perform some periodic activity
ASAP and start the timer with a 0 value. So this patch
allows 0 as the minimum value of duration.
With this patch, when the duration value is set to 0, the
timer expiration handler is called directly instead of submitting
it into the timeout queue.
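
Usage sketch:

    /* First expiry fires immediately (the handler is invoked
     * directly rather than queued), then every 500 ms.
     */
    k_timer_start(&my_timer, 0, 500);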

Jira: ZEP-2497

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-09-06 10:18:39 -07:00
Luiz Augusto von Dentz 87aa621915 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER whenever possible
SYS_DLIST_FOR_EACH_CONTAINER is preferable over using
SYS_DLIST_FOR_EACH_NODE, as it avoids casting directly, which assumes
the node field is always at the beginning.
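
Sketch of the difference:

    struct item {
        int data;
        sys_dnode_t node;  /* need not be the first field */
    };

    struct item *it;

    SYS_DLIST_FOR_EACH_CONTAINER(&my_list, it, node) {
        /* 'it' is the container, no manual cast from the node */
    }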

Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-08-25 09:08:50 -04:00
Andrew Boie 507852a4ad kernel: introduce opaque data type for stacks
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data, and also
declared as an array of the desired stack size.

This is no longer the case. Certain architectures will create a memory
region much larger to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack, and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.

We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(), this is no longer treated by the compiler
as a character pointer, even though it really is.

To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.
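
Sketch:

    /* May reserve more memory than 'size' for guard areas */
    K_THREAD_STACK_DECLARE(my_stack, 512);

    /* The only sanctioned way to reach the real stack buffer: */
    char *buf = K_THREAD_STACK_BUFFER(my_stack);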

This should catch a bunch of programming mistakes at build time:

- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
  passing it to k_thread_create()
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
  which is not actually the memory desired and may trigger a CPU
  exception

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-01 16:43:15 -07:00
Andrew Boie ae1a75b82e stack_sentinel: change cooperative check
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e, _Swap is
called).

This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.

This way is cleaner as we just have the hook in one inline
function rather than implemented in several different assembly
dialects.

The check upon interrupt is now made unconditionally rather
than checking if we are calling __swap, since the check now
is only called on cooperative _Swap(). The interrupt is always
serviced first.

Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-08 13:49:36 -05:00
Andrew Boie 3989de7e3b kernel: fix short time-slice reset
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.

In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset.  If a thread uses up a partial time slice, and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, causing it not to be able to run as long as it ought to.

There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.

Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-02 14:47:01 -04:00
Maciek Borzecki ed016fa9a0 kernel: make sure that _thread_entry() declaration matches with definition
Fixes sparse warning:
  CHECK   <snip>/zephyr/kernel/thread.c
<snip>/zephyr/kernel/thread.c:184:20: error: symbol '_thread_entry' redeclared with different type (originally declared at <snip>/zephyr/kernel/include/nano_internal.h:43) - different modifiers
  CC      kernel/thread.o

Change-Id: I2223493cdf97c811c661773f8fd430e6c00cbaa0
Signed-off-by: Maciek Borzecki <maciek.borzecki@gmail.com>
2017-05-18 12:41:56 -05:00
Andrew Boie 5dcb279df8 debug: add stack sentinel feature
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.

This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andrew Boie 41c68ece83 kernel: publish offsets to thread stack info
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
Andrew Boie d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.
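
A usage sketch (stack declaration per the opaque-stack rework that appears
earlier in this log):

    static struct k_thread worker;  /* lives outside the stack region */
    K_THREAD_STACK_DECLARE(worker_stack, 1024);

    k_tid_t tid = k_thread_create(&worker, worker_stack,
                                  K_THREAD_STACK_SIZEOF(worker_stack),
                                  worker_entry, NULL, NULL, NULL,
                                  K_PRIO_COOP(7), 0, K_NO_WAIT);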

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00
Ramesh Thomas 89ffd44dfb kernel: tickless: Add tickless kernel support
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.

The implementation involves changes in the following areas

1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure
the number of ticks in a second. The new implementation builds on
top of that feature and provides an option to set the size of the
scheduling granularity to milliseconds or microseconds. This
allows most of the current implementation to be reused. Due to
this re-use and co-existence with the tick-based kernel, the names
of variables may contain the word "tick". However, in the
tickless kernel implementation, it represents the currently
configured time unit, which would be milliseconds or
microseconds. The APIs that take time as a parameter are not
impacted and continue to pass time in milliseconds.

2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.

3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.

4. New APIs are defined to be implemented by timer drivers. Also,
they need to handle timer events differently. These changes
have been done in the HPET timer driver. In the future, other timers
that support the tickless kernel should implement these APIs as well.
These APIs are to re-program the timer, update and announce
elapsed time.

5. The Philosopher and timer_api applications have been enabled to
test the tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using the
following command:
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu

Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:28 +00:00
Ramesh Thomas 62eea121b3 kernel: tickless: Rename _Swap to allow creation of macro
Future tickless kernel patches will be inserting some
code before the call to Swap. To enable this, this patch creates
a macro named as the current _Swap, which first calls the
tickless kernel code and then calls the real __swap().
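
The eventual shape, sketched (the hook name is illustrative):

    /* Tickless hook runs first, then the real context switch */
    #define _Swap(lock) (_update_time_slice_before_swap(), __swap(lock))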

Jira: ZEP-339
Change-Id: Id778bfcee4f88982c958fcf22d7f04deb4bd572f
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:26 +00:00
Andrew Boie 73abd32a7d kernel: expose struct k_thread implementation
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control on where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.

On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.

Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-26 16:29:06 +00:00
Vincenzo Frascino dfed8c4874 kernel: Add stack_info to k_thread
This patch adds the stack information into the k_thread data structure.
The information will be set when creating a new thread (_new_thread)
and will be used by the scheduling process.
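
A sketch of the added field:

    /* Carried in struct k_thread (sketch) */
    struct _thread_stack_info {
        u32_t start;  /* stack buffer start address */
        u32_t size;   /* stack buffer size in bytes */
    };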

Change-Id: Ibe79fe92a9ef8bce27bf8616d8e0c878508c267d
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
2017-04-25 16:02:38 +00:00
Leandro Pereira ffe74b45fa kernel: Add thread events to kernel event logger
This adds a new event type to the kernel event logger that tracks
thread-related events: being added to the ready queue, pending a
thread, and exiting a thread.

It's the only event type that contains "subevents" and thus has a
non-void parameter in its respective _sys_k_event_logger_*()
function.  Luckily, unlike other events (such as IRQs
and thread switching), these functions are called from
platform-agnostic places, so there's no need to worry about changing
the assembly guts.

This is the first patch in a series adding support for better real-time
profiling of Zephyr applications.

Jira: ZEP-1463
Change-Id: I6d63607ba347f7a9cac3d016fef8f5a0a830e267
Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2017-04-25 02:16:36 +00:00
Kumar Gala 96ee45df8d kernel: refactor thread_monitor_init into common code
We do the same thing on all arches right now for thread_monitor_init, so
let's put it in a common place.  This should also fix an issue on xtensa
when the thread monitor is enabled (reference to _nanokernel.threads).

Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala b8823c4efd kernel: Refactor common _new_thread init code
We do a bit of the same stuff on all the arches to set up a new thread,
so let's put that code in a common place to unify it for everyone and
reduce duplicated code.

Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala 5742a508a2 kernel: cleanup use of naked unsigned in _new_thread
There are a few places where we used a naked unsigned type; let's be
explicit and make it 'unsigned int'.

Change-Id: I33fcbdec4a6a1c0b1a2defb9a5844d282d02d80e
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:41 +00:00
Kumar Gala cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependancies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Kumar Gala 34a57db844 Revert "kernel: Convert formatter strings to use PRI defines"
This reverts commit 7b9dc107a8.

We revert this as we intend to move away from {u}int{8,16,32,64}_t types
to our own internal types for sized variables, so we shouldn't need the
PRI macros anymore.

Change-Id: I1d9d797fee47ca266867ae65656c150f8fe2adb2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-19 10:50:51 -05:00
Anas Nashif 306e15e0a1 kernel: remove legacy kernel support
Change-Id: Iac1e21677d74f81a93cd29d64cce261676ae78a6
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 15:48:37 +00:00
Kumar Gala 7b9dc107a8 kernel: Convert formatter strings to use PRI defines
To allow for various libc implementations (like newlib) in which the way
various {u}int{8,16,32}_t types are defined vary between both libc
implementations and across architectures we need to utilize the PRI
defines.

Change-Id: Ie884fb67015502288152ecbd64c37961a4f538e4
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-17 11:09:36 -05:00
Benjamin Walsh 5d35dba73d kernel/timeouts: add description of timeouts queued on the same tick
Change-Id: I24ba889e3174b903ccea5309ad45e2b4d1755fe1
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:25 +00:00
Benjamin Walsh cf93743f50 kernel/sched: refactor _get_first_thread_to_unpend()
Modify _get_first_thread_to_unpend() so that it does not remove the
thread from the wait queue. Rename it to _find_first_thread_to_unpend()
to match the new behaviour.

This will be needed to fix a semaphore group bug.

Change-Id: I1b7531c3beecf3b6a86ecf88a93a02449edd0767
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:24 +00:00
Benjamin Walsh c1405a7d6b kernel/sched: add _is_thread_dummy()
Rather than explicitly checking the thread state bit.

Change-Id: Ic78427d9847e627a0e91d0147d3b6164450597f6
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:24 +00:00
Benjamin Walsh 8d7c274e55 kernel/sched: protect thread sched_lock with compiler barriers
This has not bitten us yet, but it was a ticking timebomb.

This is similar to the issue that was found with irq_lock/irq_unlock
implementations on several architectures. Having a volatile variable is
not the way to force the sched_lock variable to be
incremented/decremented around the accesses to data it protects.
Instead, a compiler barrier must prevent the compiler from reordering
the memory accesses around setting of sched_lock. Needed in the inline
implementations _sched_lock()/_sched_unlock_no_reschedule(), which
resolve to simple decrement/increment of the per-thread sched_lock
variable.
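
A sketch of the fixed inline (lock side shown; unlock mirrors it with the
barrier before the increment):

    static inline void _sched_lock(void)
    {
        --_current->base.sched_locked;  /* reversed polarity */

        /* Keep the compiler from moving protected accesses above
         * this point; volatile alone does not order them.
         */
        compiler_barrier();
    }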

Change-Id: I06f5b3524889f193efe69caa947118404b1be0b5
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-16 04:56:21 +00:00
Luiz Augusto von Dentz 41921dd5b9 kernel: Use SYS_DLIST_FOR_EACH_CONTAINER
Change-Id: I4cbb12af487217cfcb78969ec88a8e4c06eca27f
Signed-off-by: Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
2017-02-10 16:16:14 +00:00
Benjamin Walsh fcdb0fd6ea kernel: add _WAIT_Q_INIT()
Dissociate wait queue initialization from doubly-linked lists if the
underlying implementation is to be abstracted.

Change-Id: Id7544c6ac506643437f9c4f0ae97e7eecab8d06d
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:30:00 +00:00
Benjamin Walsh 0de9487351 kernel: add _THREAD_POLLING thread state
Will be needed for k_poll() API.

Change-Id: I0ebe4be5a9c56df2ebb8496dc49c894e982e6008
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:59 +00:00
Benjamin Walsh 0a49ba38b8 kernel: add _is_thread_state_set()
Change-Id: I2b6a51c23997afeb5252a3632172156ba96252ce
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-02-02 00:29:58 +00:00
Benjamin Walsh ed240f2796 kernel/arch: streamline thread user options
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to
publicize them.

Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.

Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh 867f8ee371 kernel: move K_ESSENTIAL from thread_state to execution_flags
The execution_flags will store the user-facing states of a thread.

This also fixes a bug where K_ESSENTIAL was already assigned to
execution_flags via the options field of
k_thread_spawn()/K_THREAD_DEFINE().

Change-Id: I91ad7a62b5d180e09eead8985ff519809959ecf2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
Benjamin Walsh a8978aba8f kernel: rename thread states symbols
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.

Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
Benjamin Walsh 3f3f4d94d5 kernel: remove K_STATIC
Unused.

Reuse bit for K_FP_REGS to keep the used bits the lowest possible.

Change-Id: I5998801ef34156271d4f66d1948a05e0b2ce58f7
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00
Benjamin Walsh dfa7ce5c94 kernel: include kernel.h in kernel_structs.h in asm files
This will be needed for some thread user options that will move to
kernel.h since they are part of the user API.

Change-Id: I46e302b6cafcdddbad3458134b98feb5b8d45d9b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:48 +00:00
Benjamin Walsh a5d8461d74 kernel: move volatile from k_thread.prio to k_thread.sched_locked
When prio and sched_locked were moved into a struct together to create a
union with the combined preempt field, the volatile qualifier moved from
sched_locked to prio by mistake.

Change-Id: I5a8e01324f14e77e3d7162c12515471826023633
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:47 +00:00
David B. Kinder ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file.  Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Benjamin Walsh 2f280416e6 kernel: fix total number of coop prios in coop-only mode
The idle priority was not accounted for.

With this change, the philosophers demo runs in coop-only mode.

Change-Id: I23db33687bcf3b2107d5fc07977143730f62e476
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-17 12:17:27 +00:00
Benjamin Walsh 168695c7ef kernel/arch: inspect prio/sched_locked together for preemptibility
These two fields in the thread structure control the preemptibility of a
thread.

sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.

By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.
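
Sketch of the combined test:

    /* prio and sched_locked sit end-to-end in a union; a single
     * unsigned comparison covers both conditions.
     */
    #define _is_preempt(thread) ((thread)->base.preempt < 0x0080U)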

Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:25 +00:00
Benjamin Walsh f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32-bit wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.

- prio can fit in one byte, limiting the priorities range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare (on x86, on other archs, there are six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Anas Nashif f6e039062a kernel: remove dependency on CONFIG_NANO_TIMERS/TIMEOUTS
Remove legacy option and use SYS_CLOCK_EXISTS where appropriate.

Change-Id: I3d524ea2776e638683f0196c0cc342359d5d810f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-01-08 18:09:52 +00:00
Benjamin Walsh 66b99f1486 kernel: add _timeout_q dump before and after adding timeout
Kernel debugging aid.

Change-Id: I852ba2f626f133d943be2ecac41354fecca478d6
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:27 +00:00
Benjamin Walsh 99eef25815 kernel: do not use sys_dlist_insert_at() in _add_timeout()
Similar to _pend_queue, it's more efficient to do the logic inline.

Change-Id: I68ac4fbc26c97b6ec9322caef98504ff6ccc8727
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:26 +00:00
Benjamin Walsh d779f3d240 kernel/arch: streamline thread flag bits used
Use least significant bits for common flags and high bits for
arch-specific ones.

Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Benjamin Walsh 04ed860c68 kernel: make _thread.sched_locked a non-atomic operator variable
Not needed, since only the thread itself can modify its own
sched_locked count.

Change-Id: I3d3d8be548d2b24ca14f51637cc58bda66f8b9ee
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:23 +00:00
Benjamin Walsh 6209218f40 kernel: optimize ms-to-ticks for certain tick frequencies
Some tick frequencies lend themselves to optimized conversions from ms
to ticks and vice-versa.

- 1000Hz which does not need any conversion
- 500Hz, 250Hz, 125Hz where the division/multiplication are a straight
  shift since they are power-of-two factors of 1000.

In addition, some more generally used values are made to use optimized
conversion equations rather than the generic one, which uses 64-bit math
and often results in calling compiler intrinsics.

These values are: 100Hz, 50Hz, 25Hz, 20Hz, 10Hz, 1Hz (the last one used
in some testing).

Avoiding the 64-bit math intrinsics has the additional benefit,
besides increased performance, of using a significantly lower amount
of stack space: 52 bytes on ARM Cortex-M and 80 bytes on x86.
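
For example, a sketch of the special-cased conversions:

    #if sys_clock_ticks_per_sec == 1000
    #define _ms_to_ticks(ms) (ms)        /* no conversion needed */
    #elif sys_clock_ticks_per_sec == 500
    #define _ms_to_ticks(ms) ((ms) / 2)  /* a straight shift */
    #else
    /* generic 64-bit path, slower and stack-hungry */
    #endif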

Change-Id: I080eb338a2637d6b1c6838c119af1a9fa37fe869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:07 +00:00
Benjamin Walsh eec37e6752 kernel: add flag that tells the system is handling timeouts
This limits the execution contexts that will go over the loop in
_unpend_first_thread() to only ISRs of very high priority that are
preempting the system clock timer ISR, and only during the time it is
handling timeouts.

Change-Id: Iaf0500d28a2de5e077c9cf9861a5a70244127d58
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:05 +00:00
Anas Nashif dc3d73bf58 kernel: fix all nanokernel usage in comments
Also include kernel.h instead of nanokernel.h

Change-Id: I65dc5e31b5409b809397296817e2d5e7adf28892
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-21 18:45:03 +00:00
Anas Nashif d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00