Commit graph

483 commits

Flavio Ceolin
67ca176754 headers: Fix headers across the project
Any word starting with an underscore followed by an uppercase letter
or a second underscore is a reserved identifier according to C99.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-17 15:49:26 -04:00
Vinayak Kariappa Chettimada
c7d2734455 kernel: Improve precision of ticks and ms conversions
The following 2 improvements are contained in this patch:

- When converting from ms to ticks, instead of using hardware cycles
  per tick, use hardware cycles per second. This ensures that the
  multiplication is done before the division, increasing precision.
- When converting from ticks to ms, instead of using cycles per tick
  and cycles per sec, use ticks per sec. This too increases the
  precision.

The concept is to make the dividend as large as possible compared to the
divisor in order to lose as little precision as possible.
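
A simplified sketch of the principle (the constants and names are
illustrative; the real code uses the kernel's clock globals):

#include <zephyr/types.h>

#define MSEC_PER_SEC 1000ULL

static const u64_t cycles_per_sec  = 32768ULL; /* e.g. a 32 KiHz timer */
static const u64_t cycles_per_tick = 327ULL;   /* 32768 / 100, truncated */
static const u64_t ticks_per_sec   = 100ULL;

static inline u64_t sketch_ms_to_ticks(u64_t ms)
{
        /* multiply by cycles per second before dividing */
        return (ms * cycles_per_sec) / (MSEC_PER_SEC * cycles_per_tick);
}

static inline u64_t sketch_ticks_to_ms(u64_t ticks)
{
        /* a single division, by ticks per second */
        return (ticks * MSEC_PER_SEC) / ticks_per_sec;
}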

Fixes #8898
Fixes #9459
Fixes #9466
Fixes #9468

Signed-off-by: Vinayak Kariappa Chettimada <vich@nordicsemi.no>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2018-08-31 11:14:39 -04:00
Paul Sokolovsky
45c0b20470 kernel: k_poll: Introduce separate status for cancelled events
Previously (as introduced in 48fadfe62), if k_poll() waited on a
queue (or a subclass like fifo), and the wait was cancelled on the
queue's side using k_queue_cancel_wait(), k_poll returned -EINTR. But
it did not set the event->state field (leaving it at
K_POLL_STATE_NOT_READY), so when waiting on multiple queues it was
not possible to tell which of them was cancelled.

This in particular broke detection of network socket EOF conditions
in POSIX poll() implementation.

This situation is now resolved with the introduction of an explicit
K_POLL_STATE_CANCELLED state, which is set for a cancelled queue (the
-EINTR return value remains the same).

This change also elaborates the docstrings of the functions mentioned
to document this behavior.
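
A usage sketch (the queue name and event setup are illustrative):

#include <kernel.h>

K_QUEUE_DEFINE(rx_queue);

void wait_for_data(void)
{
        struct k_poll_event events[1] = {
                K_POLL_EVENT_STATIC_INITIALIZER(
                        K_POLL_TYPE_DATA_AVAILABLE,
                        K_POLL_MODE_NOTIFY_ONLY, &rx_queue),
        };

        if (k_poll(events, 1, K_FOREVER) == -EINTR &&
            events[0].state == K_POLL_STATE_CANCELLED) {
                /* k_queue_cancel_wait(&rx_queue) was called on this
                 * queue; treat it e.g. as an EOF condition
                 */
        }
}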

Fixes: #9032

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2018-08-30 09:28:29 -04:00
Anas Nashif
b6304e66f6 tracing: support generic tracing hooks
Define a generic interface and hooks for tracing, to replace
kernel_event_logger and the existing tracing facilities with
something more common.
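
A hypothetical sketch of the shape of such a hook (the hook name and
the exact mechanism are assumptions here, not taken from the patch):

#ifdef CONFIG_TRACING
void sys_trace_thread_switched_in(void);
#else
#define sys_trace_thread_switched_in() do { } while (0)
#endif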

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Daniel Leung
fc182430c0 kernel: userspace: reserve stack space to store local data
This enables reserving a small amount of space at the top of the
stack to store data local to the thread when CONFIG_USERSPACE is
enabled. The first customer of this is errno.

Note that ARC, due to how it lays out the user stack and
privilege stack, sets the pointer itself rather than
relying on the common way.

Fixes: #9067

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2018-08-17 09:40:52 -07:00
Flavio Ceolin
8aec087268 kernel: Fix bitwise operators with unsigned operators
Bitwise operators should be used only with unsigned integer operands
because the results of bitwise operations on signed integers are
implementation-defined.
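
For illustration (this snippet is not from the patch itself):

int s = -1;
int shifted = s >> 1;         /* implementation-defined result */
int sign    = 1 << 31;        /* undefined behavior on 32-bit int */

unsigned int u   = 0xFFFFFFFFU;
unsigned int ok1 = u >> 1;    /* well-defined: 0x7FFFFFFF */
unsigned int ok2 = 1U << 31;  /* well-defined: 0x80000000 */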

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Flavio Ceolin
f23a8cdd2d kernel: Fix k_*_sys_clock_always_on macro
Commit 2b8cf4c98e ("include: kernel: Fix documentation for
TICKLESS_KERNEL API's") defined a macro to fix documentation builds
when TICKLESS_KERNEL is not available, but this macro does not return
the same value that the real functions return, so its use may result
in a compilation error.

Another point to consider: if one calls these functions without the
feature enabled, it is better to return a proper error like -ENOTSUP,
explicitly saying that the operation is not supported.
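
The shape of the fix, roughly (for the case where
CONFIG_TICKLESS_KERNEL is disabled):

#include <errno.h>

#ifndef CONFIG_TICKLESS_KERNEL
/* evaluates to an int like the real function, and makes the
 * unsupported case explicit
 */
#define k_enable_sys_clock_always_on() (-ENOTSUP)
#endif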

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Piotr Zięcik
3c7f990367 kernel: Do not use sys_clock_ticks_per_sec in _ms_to_ticks()
The value of sys_clock_ticks_per_sec is obtained using simple
integer division with rounding toward zero. As a result, using this
variable in _ms_to_ticks() introduces some error.

This commit eliminates sys_clock_ticks_per_sec from the equation used
in _ms_to_ticks(), removing the introduced error.

Also, this commit fixes #8895.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-08-14 07:18:44 -07:00
Kumar Gala
8777ff1304 Fix compile errors related to errno.h
Because errno.h is defined in terms of a syscall we can get into trouble
when one syscall/<FOO.h> ends up include another syscall/<BAR.h>.

Moving errno.h from kernel_includes.h to kernel.h breaks the possible
inclusion issue on some ARM platforms (which arm_mpu.h ends up include
soc.h which ends up include kernel_includes.h which would include
errno.h).

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-07-25 23:38:13 -04:00
Piotr Zięcik
96aa0d2133 kernel: Use accurate tick/ms conversion if clock rate is set at runtime
This commit enables accurate (based on 64-bit math) tick <-> ms
conversion if the system clock rate is determined at runtime.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-20 10:17:47 -04:00
Andrew Boie
7f4d006959 kernel: fix errno access for user mode
The errno "variable" is required to be thread-specific.
It gets defined to a macro which dereferences a pointer
returned by a kernel function.

In user mode, we cannot simply read/write the thread struct. We do
not have a thread-local storage mechanism, so for now use the lowest
address of the thread stack to store this value, since this is
guaranteed to be readable and writable by a user thread.

The downside of this approach is potential stack corruption
if the stack pointer goes down this far but does not exceed
the location, since a fault won't be generated in this case.
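
The mechanism, roughly (names per the errno.h of this era):

/* errno expands to a dereference of a per-thread pointer obtained
 * via a syscall; for user threads that pointer refers to the lowest
 * address of the thread's stack
 */
__syscall int *_get_errno(void);

#define errno (*_get_errno())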

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-19 16:44:59 -07:00
Piotr Zięcik
77f42f8312 kernel: Move _ms_to_ticks() and __ticks_to_ms() close to each other.
This commit moves the _ms_to_ticks() and __ticks_to_ms() functions
close to each other in order to improve code readability.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik
91fe22ec7d kernel: Improve tick <-> ms conversion.
The kernel incorrectly assumed that the system timer frequency is
always divisible without remainder by a couple of "natural" tick
rates (like 100). As a result, on some SoCs time calculations were
not correct, producing strange effects (invalid sleep times,
incorrect k_uptime_get(), etc.).

This commit enables accurate, but costly (using 64-bit math),
tick <-> ms conversion if the selected tick interval is not exact due
to hardware limitations.

Also, this commit fixes tests in which the removed _ms_per_tick was
used.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik
e995c27b42 kernel: Do not use fixed list of "good" sys_clock_ticks_per_sec values.
This commit removes the fixed list of "good" sys_clock_ticks_per_sec
values, whose usage guaranteed an integer _ms_per_tick value.

Instead of using the list, simply check whether MSEC_PER_SEC can be
divided without remainder by sys_clock_ticks_per_sec.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Piotr Zięcik
fe2ac39bf2 kernel: Cleanup _ms_to_ticks().
This commit moves all implementations of _ms_to_ticks() into a
single file. Also, the function is now inline even if
_NEED_PRECISE_TICK_MS_CONVERSION is defined.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2018-07-03 22:46:39 -04:00
Andy Ross
225c74bbdf kernel/Kconfig: Reorganize wait_q and sched algorithm choices
Make these "choice" items instead of a single boolean that implies the
element unset.

Also renames WAITQ_FAST to WAITQ_SCALABLE, as the rbtree is really
only "fast" for large queue sizes (its constant-factor overhead is
bigger than a list's!)

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-07-03 17:09:15 -04:00
Rajavardhan Gundi
d4dd928eaa kernel/stack: Introduce K_THREAD_STACK_LEN macro
This is a public macro which calculates the size to be allocated for
each stack inside a stack array. This is necessitated by internal
padding in some configurations (e.g. MPU scenarios). It is
particularly useful when a reference to K_THREAD_STACK_ARRAY_DEFINE
needs to be made from within a struct.
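
A usage sketch (the names and sizes are hypothetical):

#include <kernel.h>

#define NUM_WORKERS       4
#define WORKER_STACK_SIZE 1024

K_THREAD_STACK_ARRAY_DEFINE(worker_stacks, NUM_WORKERS,
                            WORKER_STACK_SIZE);

/* each element occupies K_THREAD_STACK_LEN(WORKER_STACK_SIZE) bytes
 * (the requested size plus any internal padding), which is the
 * stride to use when referring to the array from a struct
 */
BUILD_ASSERT(sizeof(worker_stacks) ==
             NUM_WORKERS * K_THREAD_STACK_LEN(WORKER_STACK_SIZE));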

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-07-03 08:44:09 -07:00
Ioannis Glaropoulos
92b8a41f20 include: create kernel_includes.h header to hold kernel includes
This commit creates a new header file (kernel_includes.h) that
contains all the header files to be included by kernel.h.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-06-21 22:28:00 +02:00
Andy Ross
55a7e46b66 kernel/poll: Remove POLLING thread state bit
The _THREAD_POLLING bit in thread_state was never actually a
legitimate thread "state".  It is a clever synchronization trick
introduced to allow the thread to release the irq_lock while looping
over the input event array without dropping events.

Instead, make that flag a word in the "poller" struct that lives on
the stack of the thread calling k_poll().  The disadvantage is the 4
bytes of stack space needed.  Advantages:

+ Cleaner API, it's now internal to poll instead of being globally
  visible.

+ The thread_state bit space is just one byte, and was almost full
  already.

+ Smaller code to write/test a full word and not a bitfield

+ Words are atomic, so there is no need for one of the irq
  lock/unlock pairs.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-06-11 17:25:38 -04:00
Andrew Boie
2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Leandro Pereira
0e23ad889e kernel: k_work: k_work_init() should initialize all fields
k_work_init() was not initializing all fields in the k_work struct.

Mainly, the atomic_clear_bit() function call was reading a possibly
uninitialized value, clearing a bit, and assigning it back to the
`flags` member.  The `_reserved` member was never initialized.

With the struct now initialized with the _K_WORK_INITIALIZER() macro,
initialization is consistent regardless of how a `struct k_work` is
initialized.
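
The initializer and the fixed k_work_init() have roughly this shape:

#define _K_WORK_INITIALIZER(work_handler) \
        { \
        ._reserved = NULL, \
        .handler = work_handler, \
        .flags = { 0 } \
        }

static inline void k_work_init(struct k_work *work,
                               k_work_handler_t handler)
{
        *work = (struct k_work)_K_WORK_INITIALIZER(handler);
}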

This fixes the Valgrind issues found in #7478.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-06-05 10:26:59 -04:00
Andrew Boie
b85ac3e58f kernel: clarify thread->stack_info documentation
ARC/ARM are not properly doing this at the moment but this will
be corrected in later patches.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Andrew Boie
e2d779159f kernel: update stack macro documentation
It's not possible to enforce that K_THREAD_STACK_SIZEOF()
returns the original number passed to K_THREAD_STACK_DEFINE().
Some arches need to round this number up in order to satisfy
alignment constraints.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-02 16:29:46 -04:00
Anas Nashif
47420d04f0 doc: add requirement IDs
Add requirement IDs for traceability.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif
a541e93d9a doc: document thread options
Add doxygen documentation to thread options.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Anas Nashif
ce78d16b73 doc: document kernel APIs with doxygen
Document a few structs and cleanup.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-26 09:16:42 -04:00
Andrew Boie
df55524d6a userspace: align _k_object to 4 bytes
We want the struct to be packed to conserve space, but the perms
field needs to always be on a 4-byte boundary since we do bitfield
operations on it; arches like ARC require that the sys_bitfield_*
operations be aligned to a 4-byte boundary.

Instances of struct _k_object will now be 4-byte aligned
if in an array (which they are), even though the members
are still packed.

Fixes: #7776

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-25 13:25:04 -07:00
Anas Nashif
c8e0d0cebc kernel: add requirement Ids to implementation
Add requirement ID placeholders based on APIs. The requirements will
appear as a list in the doxygen documentation. The IDs will be
expanded with more details somewhere else, probably a requirement
catalog on GitHub or some other requirement management tool. This is
still TBD.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-05-23 18:58:03 -04:00
David B. Kinder
fcbd8fb631 doc: fix misspellings in API doxygen comments
Found some misspellings missed during normal code reviews

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2018-05-23 15:28:01 -05:00
Andy Ross
4a2e50f6b0 kernel: Earliest-deadline-first scheduling policy
Very simple implementation of deadline scheduling.  Works by storing a
single word in each thread containing a deadline, setting it (as a
delta from "now") via a single new API call, and using it as extra
input to the existing thread priority comparison function when
priorities are equal.
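
A usage sketch of the new call (this sketch assumes the deadline
delta is given in hardware cycles):

#include <kernel.h>

void run_with_deadline(void)
{
        /* among threads of equal priority, the one with the
         * earliest deadline (a delta from "now") is selected first
         */
        k_thread_deadline_set(k_current_get(), 32 * 1024);
}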

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-23 14:25:52 -04:00
Andy Ross
1acd8c2996 kernel: Scheduler rewrite
This replaces the existing scheduler (but not priority handling)
implementation with a somewhat simpler one.  Behavior as to thread
selection does not change.  New features:

+ Unifies SMP and uniprocessing selection code (with the sole
  exception of the "cache" trick not being possible in SMP).

+ The old static multi-queue implementation is gone and has been
  replaced with a build-time choice of either a "dumb" list
  implementation (faster and significantly smaller for apps with only
  a few threads) or a balanced tree queue which scales well to
  arbitrary numbers of threads and priority levels.  This is
  controlled via the CONFIG_SCHED_DUMB kconfig variable.

+ The balanced tree implementation is usable symmetrically for the
  wait_q abstraction, fixing a scalability glitch Zephyr had when many
  threads were waiting on a single object.  This can be selected via
  CONFIG_WAITQ_FAST.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-19 07:00:55 +03:00
Andy Ross
ccf3bf7ed3 kernel: Fix sloppy wait queue API
There were multiple spots where code was using the _wait_q_t
abstraction as a synonym for a dlist and doing direct list management
on them with the dlist APIs.  Refactor _wait_q_t into a proper opaque
struct (not a typedef for sys_dlist_t) and write a simple wrapper API
for the existing usages.  Now replacement of wait_q with a different
data structure is much cleaner.
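
Roughly, the refactored type and its initializer look like:

#include <misc/dlist.h>

typedef struct {
        sys_dlist_t waitq;
} _wait_q_t;

#define _WAIT_Q_INIT(wait_q) { SYS_DLIST_STATIC_INIT(&(wait_q)->waitq) }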

Note that there were some SYS_DLIST_FOR_EACH_SAFE loops in mailbox.c
that got replaced by the normal/non-safe macro.  While these loops do
mutate the list in the code body, they always do an early return in
those circumstances instead of returning into the macro'd for() loop,
so the _SAFE usage was needless.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-18 01:48:48 +03:00
Andrew Boie
3772f77119 k_poll: expose to user mode
k_poll is now accessible from user mode. A memory allocation takes
place from the caller's resource pool to copy the provided
poll_events array; this array can be large enough to make allocating
it on the stack undesirable.

k_poll_signal is now a proper kernel object. Two APIs have been
added: one to reset the signaled state and one to check the current
signaled state and result value.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Andrew Boie
2b9b4b2cf7 k_queue: allow user mode access via allocators
User mode may now use queue objects. Instead of embedding the kernel's
linked list information directly in the data item, a container struct
is allocated from the caller's resource pool which is then added to
the queue. The new sflist type is now used to store a flag indicating
whether a data item needs to be freed when removed from the queue.

FIFO/LIFOs are derived from k_queues and have had allocator functions
added.
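
A user-mode usage sketch (names are illustrative):

#include <kernel.h>

K_QUEUE_DEFINE(tx_queue);

struct msg {
        u32_t payload;
};

int send(struct msg *m)
{
        /* the container struct linking m into the queue is drawn
         * from the calling thread's resource pool; returns -ENOMEM
         * on allocation failure
         */
        return k_queue_alloc_append(&tx_queue, m);
}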

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-17 23:34:03 +03:00
Kumar Gala
85699f7c6f kernel: Fix compile warning with _impl_k_object_alloc
We get the following warning with CONFIG_DYNAMIC_OBJECTS=n in
_impl_k_object_alloc:

include/kernel.h:322:57: warning: unused parameter ‘otype’ [-Wunused-parameter]
 static inline void *_impl_k_object_alloc(enum k_objects otype)
                                                         ^~~~~
The simple fix is to ARG_UNUSED() otype.
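
The fixed stub then looks roughly like this (assuming the
CONFIG_DYNAMIC_OBJECTS=n stub simply returns NULL):

static inline void *_impl_k_object_alloc(enum k_objects otype)
{
        ARG_UNUSED(otype);

        /* dynamic objects are unsupported in this configuration */
        return NULL;
}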

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-05-17 13:06:48 -05:00
Andrew Boie
f3bee951b1 kernel: stacks: add k_stack_alloc() init
Similar to what has been done with pipes and message queues,
user mode can't be trusted to provide a buffer for the kernel
to use. Remove k_stack_init() as a syscall and offer
k_stack_alloc_init() which allocates a buffer from the caller's
resource pool.
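
A usage sketch:

#include <kernel.h>

struct k_stack data_stack;

void init_stack(void)
{
        /* storage for 32 stack words is drawn from the calling
         * thread's resource pool
         */
        int rc = k_stack_alloc_init(&data_stack, 32);

        if (rc != 0) {
                /* no resource pool assigned, or pool exhausted */
        }
}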

Fixes #7285

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
0fe789ff2e kernel: add k_msgq_alloc_init()
User mode can't be trusted to provide a memory buffer to
k_msgq_init(). Introduce k_msgq_alloc_init() which allocates
the buffer out of the calling thread's resource pool and expose
that as a system call instead.
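
A usage sketch:

#include <kernel.h>

struct k_msgq rx_msgq;

void init_msgq(void)
{
        /* room for 10 messages of 8 bytes each, with the ring
         * buffer allocated from the calling thread's resource pool
         */
        int rc = k_msgq_alloc_init(&rx_msgq, 8, 10);

        if (rc != 0) {
                /* no resource pool assigned, or pool exhausted */
        }
}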

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
44fe81228d kernel: pipes: add k_pipe_alloc_init()
User mode can't be trusted to provide the kernel buffers for
internal use. The syscall for k_pipe_init() has been removed
in favor of a new API to draw the buffer memory from the
calling thread's resource pool.

K_PIPE_DEFINE() now properly locates the allocated buffer into
kernel memory.
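
A usage sketch:

#include <kernel.h>

struct k_pipe io_pipe;

void init_pipe(void)
{
        /* a 256-byte pipe buffer is allocated from the calling
         * thread's resource pool
         */
        int rc = k_pipe_alloc_init(&io_pipe, 256);

        if (rc != 0) {
                /* no resource pool assigned, or pool exhausted */
        }
}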

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
97bf001f11 userspace: get dynamic objs from thread rsrc pools
Dynamic kernel objects are no longer hard-coded to use the kernel
heap. Instead, objects will now be drawn from the calling thread's
resource pool.

Since we now have a reference counting mechanism, if an object
loses all its references and it was dynamically allocated, it will
be automatically freed.

A parallel dlist is added for efficient iteration over the set of
all dynamic objects, allowing deletion during iteration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
92e5bd7473 kernel: internal APIs for thread resource pools
Some kernel APIs may need to allocate memory in order to function
correctly, especially if they are exposed to userspace where
buffers provided by user code cannot be trusted.

Instead of simply drawing from the system heap, specific pools
may instead be assigned to threads, and any requests made on
behalf of the calling thread will draw heap memory from that pool.
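
A usage sketch (the pool parameters are illustrative):

#include <kernel.h>

K_MEM_POOL_DEFINE(app_pool, 64, 1024, 4, 4);

void assign_pool(struct k_thread *child)
{
        /* kernel-side allocations made on behalf of this thread
         * will now draw from app_pool instead of the system heap
         */
        k_thread_resource_pool_assign(child, &app_pool);
}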

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
e9cfc54d00 kernel: remove k_object_access_revoke() as syscall
Forthcoming patches will dual-purpose an object's permission
bitfield as also reference tracking for kernel objects, used to
handle automatic freeing of resources.

We do not want to allow user thread A to revoke thread B's access
to some object O if B is in the middle of an API call using O.

However we do want to allow threads to revoke their own access to
an object, so introduce a new API and syscall for that.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Andrew Boie
a2480bd472 mempool: add API for malloc semantics
This works like k_malloc() but allows the user to designate
a specific memory pool to use instead of the kernel heap.

Test coverage provided by existing tests for k_malloc(), which is
now derived from this API.
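
A usage sketch (the pool parameters are illustrative):

#include <kernel.h>

K_MEM_POOL_DEFINE(io_pool, 16, 256, 8, 4);

void use_pool(void)
{
        /* like k_malloc(), but drawing from io_pool instead of the
         * kernel heap; the block is released with k_free()
         */
        char *buf = k_mem_pool_malloc(&io_pool, 64);

        if (buf != NULL) {
                /* ... */
                k_free(buf);
        }
}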

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-16 17:32:59 -07:00
Ramakrishna Pallala
149a3296ab kernel: Deprecate k_call_stacks_analyze() API
Deprecate the k_call_stacks_analyze() API, as it only dumps
(prints) the statically defined main, idle, work and ISR stacks.

Use the k_thread_foreach() API instead, which is a generic API to
iterate over threads.

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Ramakrishna Pallala
110b8e42ff kernel: Add k_thread_foreach API
Add k_thread_foreach API to iterate over all the threads in
the system.

This API can be used for debugging threads in a multi-threaded
environment, to dump and analyze various thread parameters like
priority, state, stack address, etc.
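
A usage sketch:

#include <kernel.h>
#include <misc/printk.h>

static void dump_thread(const struct k_thread *t, void *user_data)
{
        int *count = user_data;

        (*count)++;
        printk("thread %p prio %d\n", (void *)t, t->base.prio);
}

void dump_all_threads(void)
{
        int count = 0;

        k_thread_foreach(dump_thread, &count);
        printk("%d threads\n", count);
}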

Signed-off-by: Ramakrishna Pallala <ramakrishna.pallala@intel.com>
2018-05-15 13:43:00 +03:00
Andy Ross
15c400774e kernel: Rework SMP irq_lock() compatibility layer
This was wrong in two ways, one subtle and one awful.

The subtle problem was that the IRQ lock isn't actually globally
recursive, it gets reset when you context switch (i.e. a _Swap()
implicitly releases and reacquires it).  So the recursive count I was
keeping needs to be per-thread or else we risk deadlock any time we
swap away from a thread holding the lock.

And because part of my brain apparently knew this, there was an
"optimization" in the code that tested the current count vs. zero
outside the lock, on the argument that if it was non-zero we must
already hold the lock.  Which would be true of a per-thread counter,
but NOT a global one: the other CPU may be holding that lock, and this
test will tell you *you* do.  The upshot is that a recursive
irq_lock() would almost always SUCCEED INCORRECTLY when there was lock
contention.  That this didn't break more things is amazing to me.

The rework is actually simpler than the original, thankfully.  Though
there are some further subtleties:

* The lock state implied by irq_lock() allows the lock to be
  implicitly released on context switch (i.e. you can _Swap() with the
  lock held at a recursion level higher than 1, which needs to allow
  other processes to run).  So return paths into threads from _Swap()
  and interrupt/exception exit need to check and restore the global
  lock state, spinning as needed.

* The idle loop design specifies a k_cpu_idle() function that is on
  common architectures expected to enable interrupts (for obvious
  reasons), but there is no place to put non-arch code to wire it into
  the global lock accounting.  So on SMP, even CPU0 needs to use the
  "dumb" spinning idle loop.

Finally this patch contains a simple bugfix too, found by inspection:
the interrupt return code used when CONFIG_SWITCH is enabled wasn't
correctly setting the active flag on the threads, opening up the
potential for a race that might result in a thread being scheduled on
two CPUs simultaneously.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-02 10:00:17 -07:00
Rajavardhan Gundi
68040c8d78 kernel: sem: Modify the way BUILD_ASSERT is used
The BUILD_ASSERT() macro makes use of __COUNTER__, which may not be
supported by some compilers (like xcc). So multiple uses of
BUILD_ASSERT() in the same scope are not possible with such
compilers. Instead, the expressions passed to BUILD_ASSERT() can be
"&&"ed together to achieve the same purpose.
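
For example (the CONFIG symbol is hypothetical):

/* instead of two assertions in the same scope ... */
BUILD_ASSERT(CONFIG_MY_QUEUE_SIZE > 0);
BUILD_ASSERT(CONFIG_MY_QUEUE_SIZE <= 64);

/* ... combine them into one: */
BUILD_ASSERT((CONFIG_MY_QUEUE_SIZE > 0) &&
             (CONFIG_MY_QUEUE_SIZE <= 64));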

Signed-off-by: Rajavardhan Gundi <rajavardhan.gundi@intel.com>
2018-04-30 16:46:14 -04:00
Leandro Pereira
c200367b68 drivers: Perform a runtime check if a driver is capable of an operation
Driver APIs might not implement all operations, making it possible for
a user thread to get the kernel to execute a function at 0x00000000.

Perform runtime checks in all the driver handlers, checking if they're
capable of performing the requested operation.
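
The shape of the added checks (illustrative; not a specific driver):

static inline int my_api_do_op(struct device *dev)
{
        const struct my_driver_api *api = dev->driver_api;

        if (api->do_op == NULL) {
                return -ENOTSUP; /* operation not implemented */
        }

        return api->do_op(dev);
}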

Fixes #6907.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-26 02:57:12 +05:30
Andrew Boie
31bdfc014e userspace: add support for dynamic kernel objects
A red-black tree is maintained containing the metadata for all
dynamically created kernel objects, which are allocated out of the
system heap.

Currently, k_object_alloc() and k_object_free() are supervisor-only.
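
A supervisor-mode usage sketch:

#include <kernel.h>

void make_dynamic_sem(void)
{
        struct k_sem *sem = k_object_alloc(K_OBJ_SEM);

        if (sem != NULL) {
                k_sem_init(sem, 0, 1);
                /* ... use the semaphore ... */
                k_object_free(sem);
        }
}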

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-04-24 12:27:54 -07:00
Leandro Pereira
f5f95ee3a9 kernel: sem: Ensure that initial count is less than or equal to the limit
Ensure this during static initialization (with build assertions) and
during dynamic initialization through system calls.

If the initial count is larger than the limit, it's possible for the
count to wrap around, causing locking issues.

Expanding the BUILD_ASSERT() macro after declaring a k_sem struct in
K_SEM_DEFINE() is necessary to support cases where a semaphore is
defined statically.
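
The static check has roughly this shape (the linker-section
attributes of the real macro are omitted):

#define K_SEM_DEFINE(name, initial_count, count_limit) \
        struct k_sem name = \
                _K_SEM_INITIALIZER(name, initial_count, count_limit); \
        BUILD_ASSERT(((count_limit) != 0) && \
                     ((initial_count) <= (count_limit)))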

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:04:36 +05:30
Andy Ross
3f55dafebc kernel: Deprecate k_thread_cancel() API
The only difference between this call and k_thread_abort() (beyond
some minor performance deltas) is that "cancel" acts as a no-op and
returns an error in cases where the thread has already begun
execution.
"Abort" always succeeds, of course.  That is inherently racy when used
as a "stop the thread" API: there's no way in general (or at all in
SMP situations) to know that you're calling this function "early
enough" to catch the thread before it starts.

Effectively, all that k_thread_cancel() gives you beyond
k_thread_abort() is an indication of whether or not a thread has
started.
There are many other ways to get that information that don't require
dangerous kernel APIs.

Deprecate this function.  Zephyr's own code never used it except for
its own unit test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-04-24 03:57:20 +05:30