Commit graph

267 commits

Author SHA1 Message Date
Rajavardhan Gundi
d4dd928eaa kernel/stack: Introduce K_THREAD_STACK_LEN macro
This is a public macro which calculates the size to be allocated for
stacks inside a stack array. This is necessitated because of some
internal padding (e.g. for MPU scenarios). This is particularly
useful when a reference to K_THREAD_STACK_ARRAY_DEFINE needs to be
made from within a struct.
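
A minimal sketch of the intended use (names and sizes illustrative):

    #include <kernel.h>

    #define NUM_WORKERS 4
    #define STACK_SIZE  1024

    K_THREAD_STACK_ARRAY_DEFINE(worker_stacks, NUM_WORKERS, STACK_SIZE);

    /* Each array element spans K_THREAD_STACK_LEN(STACK_SIZE) bytes,
     * which may exceed STACK_SIZE due to internal (e.g. MPU) padding,
     * so a struct mirroring the array must use the macro, not the raw
     * size. */
    struct worker_pool {
        u8_t stacks[NUM_WORKERS][K_THREAD_STACK_LEN(STACK_SIZE)];
    };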

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-07-03 08:44:09 -07:00
Ioannis Glaropoulos
92b8a41f20 include: create kernel_includes.h header to hold kernel includes
This commit creates a new header file (kernel_includes.h) that
contains all header files to be included by kernel.h.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-06-21 22:28:00 +02:00
Andy Ross
55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It was a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll (see the sketch after this
list).  The disadvantage is the 4 bytes of stack space needed.
Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so one of the irq lock/unlock pairs is no longer
  needed.
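
A rough sketch of the shape described above (member names are
assumptions, not the exact kernel definition):

    /* lives on the stack of the thread calling k_poll() */
    struct poller_like {
        struct k_thread *thread;
        volatile int is_polling; /* a full word, read/written
                                  * atomically, replacing the old
                                  * _THREAD_POLLING bit */
    };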

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
Andrew Boie
2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Leandro Pereira
0e23ad889e kernel: k_work: k_work_init() should initialize all fields
k_work_init() was not initializing all fields in the k_work struct.

Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member.  The `_reserved` member was never initialized.

With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.
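
For illustration, a sketch of the two now-consistent initialization
paths (handler name hypothetical):

    #include <kernel.h>

    static void my_handler(struct k_work *work)
    {
        /* process the work item */
    }

    /* static path: every field (handler, list node, flags) is defined */
    static struct k_work my_work = _K_WORK_INITIALIZER(my_handler);

    /* the runtime path now expands to the same initializer:
     * k_work_init(&my_work, my_handler); */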

This fixes the Valgrind issues found in #7478.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-06-05 10:26:59 -04:00
Andrew Boie
b85ac3e58f kernel: clarify thread->stack_info documentation
ARC/ARM are not properly doing this at the moment but this will
be corrected in later patches.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Andrew Boie
e2d779159f kernel: update stack macro documentation
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.
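
A short sketch of the documented behavior:

    K_THREAD_STACK_DEFINE(my_stack, 1024);

    /* may be larger than 1024: some arches round the allocation up to
     * satisfy alignment constraints */
    size_t actual_size = K_THREAD_STACK_SIZEOF(my_stack);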

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Anas Nashif
47420d04f0 doc: add requirement IDs
Add requirement IDs for traceability.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif
a541e93d9a doc: document thread options
Add doxygen documentation to thread options.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif
ce78d16b73 doc: document kernel APIs with doxygen
Document a few structs and cleanup.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Andrew Boie
df55524d6a userspace: align _k_object to 4 bytes
We want the struct to be packed to conserve space, but the
perms field needs to always be on a 4-byte boundary since
we do bitfield operations on it; arches like ARC require
that the sys_bitfield_* operations be aligned to a 4-byte
boundary.

Instances of struct _k_object will now be 4-byte aligned
if in an array (which they are), even though the members
are still packed.
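
A sketch of the layout idea (member names follow the era's kernel but
are assumptions here):

    struct k_object_like {
        char *name;
        u8_t perms[CONFIG_MAX_THREAD_BYTES];
        u8_t type;
        u8_t flags;
        u32_t data;
    } __packed __aligned(4); /* members stay packed, but each instance
                              * (and thus each array element) is 4-byte
                              * aligned so sys_bitfield_*() on 'perms'
                              * remains word-aligned */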

Fixes: #7776

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-25 13:25:04 -07:00
Anas Nashif
c8e0d0cebc kernel: add requirement Ids to implementation
Add requirement ID placeholders based on APIs. The requirements will
appear as a list in doxygen documentation. The IDs will be expanded with
more details somewhere else, probably a requirement catalog on GH or
some other requirement management tool. This is still TBD.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-23 18:58:03 -04:00
David B. Kinder
fcbd8fb631 doc: fix misspellings in API doxygen comments
Found some misspellings missed during normal code reviews

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-05-23 15:28:01 -05:00
Andy Ross
4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
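
A minimal usage sketch (the deadline value is illustrative):

    #include <kernel.h>

    void set_deadline(void)
    {
        /* deadlines only break ties between threads of equal priority;
         * the value is a delta from "now" */
        k_thread_deadline_set(k_current_get(), 100);
    }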

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross
1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross
ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie
3772f77119 k_poll: expose to user mode
k_poll is now accessible from user mode. A memory allocation takes place
from the caller's resource pool to copy the provided poll_events
array, which can be large enough to make allocating it on the stack
undesirable.

k_poll_signal are now proper kernel objects. Two APIs have been added,
one to reset the signaled state and one to check the current signaled
state and result value.
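
A minimal sketch of polling a signal with this API (the reset/check
call names are not shown and assumed):

    #include <kernel.h>

    static struct k_poll_signal sig;

    void wait_for_signal(void)
    {
        struct k_poll_event events[] = {
            K_POLL_EVENT_INITIALIZER(K_POLL_TYPE_SIGNAL,
                                     K_POLL_MODE_NOTIFY_ONLY, &sig),
        };

        k_poll_signal_init(&sig);
        k_poll(events, 1, K_FOREVER);
        /* inspect the result, then reset the signaled state for reuse */
    }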

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie
2b9b4b2cf7 k_queue: allow user mode access via allocators
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.
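
A sketch of the allocator-backed path from user mode
(k_queue_alloc_append() is the assumed variant; data item illustrative):

    #include <kernel.h>

    static struct k_queue q;
    static int my_data = 42;

    void producer(void)
    {
        k_queue_init(&q);

        /* the container holding the list node is allocated from the
         * calling thread's resource pool; non-zero means no memory */
        if (k_queue_alloc_append(&q, &my_data) != 0) {
            /* resource pool exhausted */
        }
    }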

FIFO/LIFOs are derived from k_queues and have had allocator functions
added.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Kumar Gala
85699f7c6f kernel: Fix compile warning with _impl_k_object_alloc
We get the following warning with CONFIG_DYNAMIC_OBJECTS=n in
_impl_k_object_alloc:

include/kernel.h:322:57: warning: unused parameter ‘otype’ [-Wunused-parameter]
 static inline void *_impl_k_object_alloc(enum k_objects otype)
                                                         ^~~~~
Simple fix is to ARG_UNUSED otype.
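
The fixed function then looks like this (sketch):

    static inline void *_impl_k_object_alloc(enum k_objects otype)
    {
        ARG_UNUSED(otype);

        return NULL;
    }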

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-17 13:06:48 -05:00
Andrew Boie
f3bee951b1 kernel: stacks: add k_stack_alloc() init
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.
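
A minimal usage sketch:

    #include <kernel.h>

    void stack_user(void)
    {
        struct k_stack stack;

        /* the 32-entry buffer is drawn from the caller's resource pool */
        if (k_stack_alloc_init(&stack, 32) == 0) {
            k_stack_push(&stack, 0xdeadbeef);
        }
    }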

Fixes #7285

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
0fe789ff2e kernel: add k_msgq_alloc_init()
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.
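
A minimal usage sketch (message type illustrative):

    #include <kernel.h>

    struct my_msg {
        u32_t id;
    };

    static struct k_msgq msgq;

    void msgq_user(void)
    {
        /* the ring buffer for 10 messages comes from the caller's pool */
        if (k_msgq_alloc_init(&msgq, sizeof(struct my_msg), 10) == 0) {
            struct my_msg m = { .id = 1 };

            k_msgq_put(&msgq, &m, K_NO_WAIT);
        }
    }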

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
44fe81228d kernel: pipes: add k_pipe_alloc_init()
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.

K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.
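
A minimal usage sketch:

    #include <kernel.h>

    static struct k_pipe pipe;

    void pipe_user(void)
    {
        /* the 256-byte ring buffer is allocated from the caller's
         * resource pool instead of being supplied by user code */
        if (k_pipe_alloc_init(&pipe, 256) != 0) {
            /* resource pool exhausted */
        }
    }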

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
97bf001f11 userspace: get dynamic objs from thread rsrc pools
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
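
A sketch of wiring a pool to a thread (pool geometry illustrative):

    #include <kernel.h>

    /* block sizes 64..1024 bytes, 4 max-size blocks, 4-byte aligned */
    K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

    void assign_pool(struct k_thread *thread)
    {
        /* kernel-side allocations made on behalf of this thread now
         * draw from app_pool instead of the system heap */
        k_thread_resource_pool_assign(thread, &app_pool);
    }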

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
e9cfc54d00 kernel: remove k_object_access_revoke() as syscall
Forthcoming patches will dual-purpose an object's permission
bitfield as reference tracking for kernel objects, used to
handle automatic freeing of resources.

We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.

However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
a2480bd472 mempool: add API for malloc semantics
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.

Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.
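
A minimal usage sketch (pool geometry illustrative):

    #include <kernel.h>

    K_MEM_POOL_DEFINE(heap_pool, 64, 4096, 2, 4);

    void pool_malloc_user(void)
    {
        void *mem = k_mem_pool_malloc(&heap_pool, 200);

        if (mem != NULL) {
            /* ... use the buffer ... */
            k_free(mem); /* the block header records its source pool */
        }
    }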

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Ramakrishna Pallala
149a3296ab kernel: Deprecate k_call_stacks_analyze() API
Deprecate the k_call_stacks_analyze() API, as it only dumps
(prints) the statically defined main, idle, work and ISR stacks.

Use the k_thread_foreach() API instead, which is a generic API
to iterate over threads.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Ramakrishna Pallala
110b8e42ff kernel: Add k_thread_foreach API
Add a k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multi-threaded
environment to dump and analyze various thread parameters like
priority, state, stack address, etc.
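
A minimal usage sketch:

    #include <kernel.h>
    #include <misc/printk.h>

    static void dump_thread(const struct k_thread *t, void *user_data)
    {
        ARG_UNUSED(user_data);

        printk("thread %p prio %d\n", (void *)t, (int)t->base.prio);
    }

    void debug_dump(void)
    {
        k_thread_foreach(dump_thread, NULL);
    }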

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Andy Ross
15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that common
  architectures expect to enable interrupts (for obvious reasons), but
  there is no place to put non-arch code to wire it into the global
  lock accounting.  So on SMP, even CPU0 needs to use the "dumb"
  spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Rajavardhan Gundi
68040c8d78 kernel: sem: Modify the way BUILD_ASSERT is used
The BUILD_ASSERT() macro makes use of __COUNTER__, which may not be
supported in some compilers (like xcc), so multiple uses of
BUILD_ASSERT() in the same scope are not possible for such compilers.
Instead, the expressions passed to BUILD_ASSERT() can be "&&"ed
together to achieve the same purpose.
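
For example (values illustrative):

    #define INITIAL_COUNT 1
    #define COUNT_LIMIT   4

    /* two separate BUILD_ASSERT() statements in this scope would need
     * __COUNTER__ for unique symbol names; "&&"ing the expressions
     * into one assertion avoids that */
    BUILD_ASSERT((INITIAL_COUNT >= 0) && (INITIAL_COUNT <= COUNT_LIMIT));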

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-04-30 16:46:14 -04:00
Leandro Pereira
c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
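
The shape of the fix, with a hypothetical API struct and operation
name:

    #include <device.h>
    #include <errno.h>

    struct my_driver_api {
        int (*op)(struct device *dev);
    };

    static inline int my_op(struct device *dev)
    {
        const struct my_driver_api *api = dev->driver_api;

        /* refuse rather than jump through a NULL pointer at 0x0 */
        if (api->op == NULL) {
            return -ENOTSUP;
        }

        return api->op(dev);
    }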

Fixes #6907.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Andrew Boie
31bdfc014e userspace: add support for dynamic kernel objects
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.

Currently, k_object_alloc() and k_object_free() are supervisor-only.
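
A minimal sketch (object type illustrative):

    #include <kernel.h>

    void dyn_obj_user(void)
    {
        /* supervisor-only for now, per the note above */
        struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

        if (sem != NULL) {
            k_sem_init(sem, 0, 1);
            /* ... use the semaphore, then drop it ... */
            k_object_free(sem);
        }
    }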

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-24 12:27:54 -07:00
Leandro Pereira
f5f95ee3a9 kernel: sem: Ensure that initial count is less than or equal to limit
Ensure this value during static initialization (with build assertions),
and dynamic initializations through system calls.

If the initial count is larger than the limit, it's possible for the
count to wrap around, causing locking issues.

Expanding the BUILD_ASSERT() macros after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.
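
For example:

    K_SEM_DEFINE(ok_sem, 0, 1);  /* fine: initial count <= limit */
    /* K_SEM_DEFINE(bad_sem, 2, 1);  would now fail the build */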

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:04:36 +05:30
Andy Ross
3f55dafebc kernel: Deprecate k_thread_cancel() API
The only difference between this call and k_thread_abort() (beyond
some minor performance deltas) is that "cancel" will act as a noop in
cases where the thread has begun execution and will return an error.
"Abort" always succeeds, of course.  That is inherently racy when used
as a "stop the thread" API: there's no way in general (or at all in
SMP situations) to know that you're calling this function "early
enough" to catch the thread before it starts.

Effectively, all k_thread_cancel() gives you that k_thread_abort()
doesn't is an indication about whether or not a thread has started.
There are many other ways to get that information that don't require
dangerous kernel APIs.

Deprecate this function.  Zephyr's own code never used it except for
its own unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Andy Ross
8606fabf74 kernel: Scheduler refactoring: use _reschedule_*() always
There was a somewhat promiscuous pattern in the kernel where IPC
mechanisms would do something that might affect the current thread
choice, then check _must_switch_threads() (or occasionally
__must_switch_threads -- don't ask, the distinction is being replaced
by real English words), sometimes _is_in_isr() (but not always, even
in contexts where that looks like it would be a mistake), and then
call _Swap() if everything is OK, otherwise releasing the irq_lock().
Sometimes this was done directly, sometimes via the inverted test,
sometimes (poll, heh) by doing the test when the thread state was
modified and then needlessly passing the result up the call stack to
the point of the _Swap().

And some places were just calling _reschedule_threads(), which did all
this already.

Unify all this madness.  The old _reschedule_threads() function has
split into two variants: _reschedule_yield() and
_reschedule_noyield().  The latter is the "normal" one that respects
the cooperative priority of the current thread (i.e. it won't switch
out even if there is a higher priority thread ready -- the current
thread has to pend itself first), the former is used in the handful of
places where code was doing a swap unconditionally, just to preserve
precise behavior across the refactor.  I'm not at all convinced it
should exist...

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30
Michael Hope
5f67a6119d include: improve compatibility with C++ apps.
This patch lets a C++ application use more of Zephyr by adding guards
and changing some constructs to the C++11 equivalent.

Changes include:

- Adding guards
- Switching to static_assert
- Switching to a template for ARRAY_SIZE as g++ doesn't have the
  builtin.
- Re-ordering designated initialisers to match the struct field order
  as G++ only supports simple designated initialisers.

Signed-off-by: Michael Hope <mlhx@google.com>
2018-04-09 23:21:52 -04:00
David B. Kinder
3314c3675f doc: misspellings in public API doxygen comments
occasional spelling-check pass found some misspellings

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-04-05 19:16:24 -04:00
Andrew Boie
aa6de29c4b lib: user mode compatible mempools
We would like to offer the capability to have memory pool heap data
structures that are usable from user mode threads. The current
k_mem_pool implementation uses IRQ locking and system-wide membership
lists that make it incompatible with user mode constraints.

However, much of the existing memory pool code can be abstracted to some
common functions that are used by both k_mem_pool and the new
sys_mem_pool implementations.

The sys_mem_pool implementation has the following differences (a usage
sketch follows the list):

* The alloc/free APIs work directly with pointers; no internal memory
block structures are exposed to the end user. A pointer to the source
pool is provided for allocation, but freeing memory just requires the
pointer and nothing else.

* k_mem_pool uses IRQ locks and requires very fine-grained locking in
order to not affect system latency. sys_mem_pools just use a semaphore
to protect the pool data structures at the API level, since there are
no implications for system responsiveness with this kind of concurrency
control.

* sys_mem_pools do not support the notion of timeouts for requesting
memory.

* sys_mem_pools are specified at compile time with macros, just like
kernel memory pools. Alternative forms of specification at runtime
will be a later enhancement.
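
A usage sketch under the API described above (header path and exact
signatures assumed):

    #include <misc/mempool.h>

    extern struct sys_mem_pool app_pool; /* from SYS_MEM_POOL_DEFINE() */

    void pool_user(void)
    {
        void *p;

        sys_mem_pool_init(&app_pool);

        p = sys_mem_pool_alloc(&app_pool, 128);
        if (p != NULL) {
            /* freeing needs only the pointer, not the pool */
            sys_mem_pool_free(p);
        }
    }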

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-05 07:03:05 -07:00
Youvedeep Singh
188c1ab5ca kernel: msg_q: Add routine to fetch basic attrs from message queue.
For the POSIX layer implementation of message queues, we need to fetch
basic attributes of a message queue. Currently this routine is not
present in Zephyr, so add it to the message queue code.
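
A sketch of the new routine (attribute field names assumed from the
era's API):

    #include <kernel.h>

    void query_msgq(struct k_msgq *q)
    {
        struct k_msgq_attrs attrs;

        k_msgq_get_attrs(q, &attrs);
        /* attrs.msg_size, attrs.max_msgs and attrs.used_msgs are enough
         * to back mq_getattr() in the POSIX layer */
    }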

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2018-04-03 15:30:44 -04:00
Ramakrishna Pallala
2b8cf4c98e include: kernel: Fix documentation for TICKLESS_KERNEL APIs
Fix documentation scope for TICKLESS_KERNEL APIs.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-03-30 12:15:15 -04:00
Ramakrishna Pallala
92489ea4dd include: kernel: Fix typo in fifo API description
Fix typo FIF -> FIFO in API description of k_fifo_peek_head

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-03-30 12:15:15 -04:00
Anas Nashif
954d550364 kernel: api: mark internal functions as such
Add @internal doxygen command to mark internal functions.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Anas Nashif
585fd1faec doc: kernel: capitalize Fifo/Lifo
Capitalise Fifo and Lifo in documentation; those are acronyms and need
to be all caps.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Anas Nashif
166f5194ae kernel: api: fix doxygen group ending
Extra comments after @} were showing up in the next section details.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-20 14:01:30 -04:00
Adithya Baglody
3a6d72ecde kernel: mem_domain: k_mem_partition is now placed in kernel memory.
The k_mem_partition structs need to be placed in kernel memory.
This patch ensures that these structs are placed correctly.
Also, when a struct k_mem_domain is declared, it is advised to add
__kernel, as shown below.
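
For example:

    /* place the domain object in kernel memory */
    __kernel struct k_mem_domain app_domain;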

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-03-20 09:19:59 -07:00
Leandro Pereira
08de658eb9 kernel: mem_domain: Use u8_t for number of partitions in struct
During system initialization, the global static variable (local to
mem_domain.c) is initialized with the maximum number of partitions per
domain.  This variable is of u8_t type.

Assertions throughout the code will check ranges and test for overflow
by relying on implicit type conversion.

Use a u8_t instead of a u32_t to avoid doubts.  Also, reorder the
k_mem_partition struct to remove the alignment hole created by reducing
sizeof(num_partitions).

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-02 07:08:49 +01:00
Andy Ross
2724fd11cb kernel: SMP-aware scheduler
The scheduler needs a few tweaks to work in SMP mode:

1. The "cache" field just doesn't work.  With more than one CPU,
   caching the highest priority thread isn't useful as you may need N
   of them at any given time before another thread is returned to the
   scheduler.  You could recalculate it at every change, but that
   provides no performance benefit.  Remove.

2. The "bitmask" designed to prevent the need to individually check
   priorities is likewise dropped.  This could work, but in fact on
   our only current SMP system and with current K_NUM_PRIORITIES
   values it provides no real benefit.

3. The individual threads now have a "current cpu" and "active" flag
   so that the choice of the next thread to run can correctly skip
   threads that are active on other CPUs.

The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.

Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API.  This should be a very
early candidate for lock granularity attention!

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross
e717267abf kernel, esp32: Add _arch_start_cpu API
This is a mostly-internal API to start a secondary system CPU, with an
implementation for the ESP-32 "APP" cpu.  Exposed in kernel.h because
it's plausibly useful for asymmetric MP code managed by an app.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross
042d8ecca9 kernel: Add alternative _arch_switch context switch primitive
The existing __swap() mechanism is too high level for some
applications because of its scheduler-awareness.  This introduces a
new _arch_switch() mechanism, which is a simpler primitive that looks
like:

    void _arch_switch(void *handle, void **old_handle_out);

The new thread handle (typically just a stack pointer) is specified
explicitly instead of being picked up from the scheduler by
per-architecture code, and on return the "old" thread handle that got
switched out is returned through the pointer.

The new primitive (currently available only on xtensa) is selected
when CONFIG_USE_SWITCH is "y".  A new C _Swap() implementation based
on this primitive is then added which operates compatibly.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Andy Ross
03c1d28e6e work_q: Correctly clear pending flag in delayed work queue, update docs
As discovered in https://github.com/zephyrproject-rtos/zephyr/issues/5952

...a duplicate call to k_delayed_work_submit_to_queue() on a work item
whose timeout had expired but which had not yet executed (i.e. it was
pending in the queue for the active work queue thread) would fail,
because the cancellation step wouldn't clear the PENDING bit, causing
the resubmission to see the object in an invalid state.  Trivially
fixed by adding a bit clear.
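
A sketch of the now-working resubmission sequence (delay values
illustrative):

    #include <kernel.h>

    static void work_handler(struct k_work *work)
    {
        /* ... */
    }

    static struct k_delayed_work dwork;

    void submit_twice(void)
    {
        k_delayed_work_init(&dwork, work_handler);
        k_delayed_work_submit(&dwork, 10);

        /* if the 10 ms timeout expires before the handler runs, this
         * resubmission now clears PENDING and succeeds */
        k_delayed_work_submit(&dwork, 10);
    }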

It also turns out that the behavior of the code doesn't match the
docs, which state that a PENDING work item is not supposed to be
cancelled at all.  Fix the docs to remove that.

And on yet further review, it turns out that there's no way to make a
test like the one in the linked bug threadsafe.  The work queue does
no synchronization by design, so if the user code does no external
synchronization it might very well clobber the running handler.  Added
a sentence to the docs to reflect this gotcha.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-13 18:08:57 -05:00