Commit graph

Andy Ross
1202810119 kernel/sched: _thread_priority_set needs to be sched_lock aware
This API doesn't use the normal thread priority comparison itself, so
doesn't get the magic that thread_base.prio provides.  If called when
another thread should be run, this would always preempt the current
thread, even if the scheduler lock was taken.

That was benign until recent spinlockification exposed it: a mutex in
the philosophers test run in preempt_only mode would swap away while
holding a spinlock (which used to work with irq locks) and fail later
with a "recursive" spinlock assert.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
d653e6868e tests/kernel/schedule_api: Bump stack size and unify stacks
The new spinlock validation features combined with spinlockification
have increased stack usage a bit in CONFIG_ASSERT builds, but this is
a good feature we want to keep.  This test was bumping into limits, so
increase the size from 512 to 640 bytes.

Unfortunately, this is also a huge test that creates a LOT of those
stacks across different test cases, so that minor bump blows us past
the 64k SRAM limit on a bunch of boards.  So unify all those stacks
that are only ever used in one case at a time so the memory can be
shared.  Now there's one fixed stack, named "tstack", and one array
"tstacks".  Much smaller.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
8a3d57b6cc kernel/userspace: Spinlockification
This port is a little different.  Most subsystem synchronization uses
simple critical sections that can be replaced with global or
per-object spinlocks.  But the userspace code was heavily exploiting
the fact that irq_lock was recursive and could be taken at any time.
So outer functions were doing locking and then calling into inner
helpers that would take their own lock (because they were called from
other contexts that did not lock).

Rather than try to rework this right now, this just creates a set of
spinlocks corresponding to the recursive states in which they are
taken, to preserve the existing semantics exactly.
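
A minimal sketch of the resulting pattern (lock names are hypothetical,
not the actual userspace locks):

    #include <spinlock.h>

    static struct k_spinlock outer_lock;  /* taken by outer API functions */
    static struct k_spinlock inner_lock;  /* taken by helpers that are also
                                           * called from unlocked contexts */

    static void inner_helper(void)
    {
        k_spinlock_key_t key = k_spin_lock(&inner_lock);
        /* ... touch inner state ... */
        k_spin_unlock(&inner_lock, key);
    }

    void outer_api(void)
    {
        k_spinlock_key_t key = k_spin_lock(&outer_lock);
        inner_helper();   /* takes a *different* lock: no recursion */
        k_spin_unlock(&outer_lock, key);
    }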

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
b29fb220b1 kernel/timer: Spinlockify
Simple global lock around the timer API.  A lot of the existing usage
was actually needless vestigial locking around scheduler and timeout
APIs that are now internally synchronized.
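
Schematically, the new shape of a timer API call (simplified; the real
arming/abort logic is elided):

    static struct k_spinlock lock;   /* one global lock for the timer API */

    void k_timer_stop(struct k_timer *timer)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);
        /* abort the timeout and wake waiters; the scheduler and
         * timeout calls made here are internally synchronized and
         * need no extra locking of their own
         */
        k_spin_unlock(&lock, key);
    }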

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f582b55dd6 kernel/pipe: Spinlockify
One spinlock per pipe object.  Also removed some vestigial locking
around _ready_thread().  That call is internally synchronized now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
d27d4e6af2 kernel/sched: Remove remaining irq_lock use
The k_sleep() locking was actually to protect the _current state from
preemption before the context switch, so document that and replace
with a spinlock.  Should probably unify this with the rather cleaner
logic in pend_curr(), but right now "sleeping" and "pended" are
needlessly distinct states.

And we can remove the locking entirely from k_wakeup().  There's no
reason for any of that to need to be synchronized.  Even if we're
racing with other thread modifications, the state on exit will be a
runnable thread without a timeout, or whatever timeout/pend state the
other side was requesting (i.e. it's a bug, but not one solved by
synchronization).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
be03dbd4c7 kernel/msg_q: Spinlockify
One lock per msgq.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f0933d0ded kernel/stack: Spinlockify
One lock per stack.  Straightforward synchronization.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
9eeb6b8779 kernel/mbox: Spinlockify
Straightforward per-struct-k_mbox lock.  Nothing changes in locking
strategy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
7df0216d1e kernel/mutex: Spinlockify
Use a subsystem lock, not a per-object lock.  Really we want to lock
at mutex granularity where possible, but (1) that has non-trivial
memory overhead vs. e.g. directly spinning on the mutex state and (2)
the locking in a few places was originally designed to protect access
to the mutex *owner* priority, which is not 1:1 with a single mutex.

Basically the priority-inheriting mutex code will need some rework
before it works as a fine-grained locking abstraction in SMP.

Note that this fixes an invisible bug: with the older code,
k_mutex_unlock() would actually call irq_unlock() twice along the path
where there was a new owner, which is benign on existing architectures
(so long as the key argument is unchanged) but was never guaranteed to
work.  With a spinlock, unlocking an unlocked/unowned lock is a
detectable assertion condition.
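
Schematically, the old hazard (illustrative, not the literal code):

    unsigned int key = irq_lock();

    /* ... hand the mutex to a new owner ... */
    irq_unlock(key);      /* first unlock */
    /* ... */
    irq_unlock(key);      /* second unlock: benign with irq_lock so long
                           * as key is unchanged, but an assertable error
                           * once this becomes k_spin_unlock()
                           */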

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
603ea42764 kernel/queue: Spinlockify
Straightforward port.  Each struct k_queue object gets a spinlock to
control obvious data ownership.

Note that this port actually discovered a preexisting bug: the -ENOMEM
case in queue_insert() was failing to release the lock.  But because
the tests that hit that path didn't rely on other threads being
scheduled, they ran to successful completion even with interrupts
disabled.  The spinlock API detects that as a recursive lock when
asserts are enabled.
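
The bug, schematically (simplified from queue_insert(); the allocation
call is representative, not literal):

    static int queue_insert(struct k_queue *queue, void *prev, void *data)
    {
        k_spinlock_key_t key = k_spin_lock(&queue->lock);
        void *anode = alloc_node(data);          /* hypothetical allocator */

        if (anode == NULL) {
            k_spin_unlock(&queue->lock, key);    /* the missing release */
            return -ENOMEM;
        }
        /* ... link the node in ... */
        k_spin_unlock(&queue->lock, key);
        return 0;
    }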

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f6521a360d kernel/thread_abort: Remove needless locking
The two APIs protected by this lock are themselves internally
synchronized.  Replace the irq_lock with a spinlock anyway, because
what I think it's doing is trying to prevent a race where something
else, like an ISR or a thread it wakes up, mucks with the thread before
this completes.  Seems fragile on SMP as it stands, but this preserves
behavior on uniprocessor architectures.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
c0bdcbaaf8 kernel/mem_slab: Spinlockify
Use a subsystem lock instead of a per-slab lock for now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
e456d0f7dd kernel/thread: Spinlockify
Straightforward spinlock around the global thread state.  Two changes
to the locking strategy were needed:

1. There was a needless recursive lock taken in schedule_new_thread().
This is only ever invoked in circumstances where the lock was already
held, or where there is no need for internal synchronization.

2. The recursive irq_lock() around the loop that spawns the initial
static threads (which happens at the start of main thread execution)
was removed.  Most of the job (i.e. making sure the threads don't run
before the loop is finished) was already covered by the sched_lock it
was taking.  The remaining attempt, to promise that all the timeouts
happen on the same tick, is true by construction at system startup on
uniprocessor systems, and not possible to guarantee at all under SMP
(where other CPUs can take that timer interrupt).  We don't document or
test for this feature, so don't try to be fancy.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
84b47a9290 kernel/mempool: Spinlockify
Really the locking in this file is vestigial.  It only exists because
the scheduler's _unpend_all() call to wake up everyone waiting on a
wait_q is unsynchronized, because it was written to assume
irq_lock-style-locking.  It would be cleaner to put that locking into
the wait_q itself and/or use the scheduler's subsystem lock.  But it's
not clear there's any performance benefit, so let's stick with the
more easily verifiable change first.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
f2b1a4bb64 kernel/poll: Spinlockify
Poll gets a single subsystem lock for now.  The existing locking in
Ben's code is subtle, being used both for latency control and for
critical section protection.  So getting each k_poll_event to use a
separate lock will require care and a little logic change.  Do the
simple version for now, which still works to decouple it from the
global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reasons, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code to do a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful as it spams the
global lock.  Provide an _unlocked() variant for
_Swap/_reschedule/_pend_curr for simplicity and efficiency.
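
A sketch of how such a variant can avoid the global lock entirely,
using a private dummy spinlock (an assumption about the implementation,
not the literal patch):

    static inline void _Swap_unlocked(void)
    {
        struct k_spinlock lock = {};          /* local, never contended */
        k_spinlock_key_t key = k_spin_lock(&lock);

        (void)_Swap(&lock, key);   /* releases the dummy lock atomically
                                    * with the context switch */
    }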

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
da37a53a54 kernel/k_sem: Spinlockify
Switch semaphores to use a subsystem spinlock instead of the system
irqlock.

Note that this is only "halfway there".  Semaphores will no longer
contend with other irqlock users on SMP systems, but all semaphores
are still sharing the same lock.  Really we want semaphores to be
independently synchronized, but adding 4 bytes to every one (there are
a LOT of these things) for a separate spinlock is too much to pay.

Rather, a proper SMP-aware implementation would spin on the count
variable directly.  But let's not rock that boat quite yet.
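
For the record, the "spin on the count directly" idea would look
roughly like this (hypothetical, not part of this patch):

    static atomic_t count;

    static void sem_take_spinning(void)
    {
        atomic_val_t old;

        do {
            do {
                old = atomic_get(&count);
            } while (old == 0);      /* wait for a give */
        } while (!atomic_cas(&count, old, old - 1));
    }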

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
ec554f44d9 kernel: Split reschedule & pend into irq/spin lock versions
Just like with _Swap(), we need two variants of these utilities which
can atomically release a lock and context switch.  The naming shifts
(for byte count reasons) to _reschedule/_pend_curr, and both have an
_irqlock variant which takes the traditional locking.

Just refactoring.  No logic changes.
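
The resulting entry points, roughly (signatures approximate):

    void _reschedule(struct k_spinlock *lock, k_spinlock_key_t key);
    void _reschedule_irqlock(u32_t key);

    int _pend_curr(struct k_spinlock *lock, k_spinlock_key_t key,
                   _wait_q_t *wait_q, s32_t timeout);
    int _pend_curr_irqlock(u32_t key, _wait_q_t *wait_q, s32_t timeout);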

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
fb505b3cfd spinlock: Support ztest mocking
Spinlocks are written above the arch-provided _arch_irq_un/lock()
calls.  But those aren't stubbed by the mocking layer, and as ztest
isn't an "arch" I don't see an obvious place to put the stubs.  Handle
them in spinlock.h.
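
The shape of the workaround (guard name and stub bodies are assumptions
for illustration):

    #ifdef ZTEST_UNITTEST
    /* Unit-test builds have no real arch layer, so give the spinlock
     * code trivial interrupt-masking stand-ins.
     */
    static inline unsigned int _arch_irq_lock(void) { return 0; }
    static inline void _arch_irq_unlock(unsigned int key) { ARG_UNUSED(key); }
    #endif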

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
04382b9a2a kernel/mem_domain: Spinlockify
Simple locking requirements here mean we can just use a single
subsystem lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
32a29d2805 kernel/atomic_c: Spinlockify
Mostly useless patch.  All architectures have their own code for
atomic operations and don't use this fallback.  Still, it's a trivial
locking setup and we might as well.
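
The fallback pattern, schematically:

    static struct k_spinlock lock;   /* one lock for all fallback atomics */

    atomic_val_t atomic_add(atomic_t *target, atomic_val_t value)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);
        atomic_val_t ret = *target;

        *target += value;
        k_spin_unlock(&lock, key);
        return ret;                  /* returns the previous value */
    }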

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
a37a981b21 kernel/work_q: Spinlockify
Each work_q object gets a separate spinlock to synchronize access
instead of the global lock.  Note that there was a recursive lock
condition in k_delayed_work_cancel(), so that's been split out into an
internal unlocked version and the API entry point that wraps it with a
lock.
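
The split follows the usual unlocked-internal/locked-wrapper pattern
(simplified sketch; field names approximate):

    static int work_cancel(struct k_delayed_work *work)
    {
        /* caller holds work->work_q->lock */
        /* ... do the actual cancellation ... */
        return 0;
    }

    int k_delayed_work_cancel(struct k_delayed_work *work)
    {
        k_spinlock_key_t key = k_spin_lock(&work->work_q->lock);
        int ret = work_cancel(work);

        k_spin_unlock(&work->work_q->lock, key);
        return ret;
    }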

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
5aa7460e5c kernel/spinlock: Move validation out of header inlines
The validation checking recently added to spinlocks is useful, but
requires kernel-internals like _current and _current_cpu in a header
context that tends to be needed before those are declared (or where we
don't want them declared), and is causing big header dependency
headaches.

Move it to C code; it's just a validation tool, not a performance
thing.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.
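
The two entry points, schematically:

    int _Swap_irqlock(unsigned int key);     /* legacy; to be retired once
                                              * remaining users are ported */
    int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);
                                             /* releases the spinlock
                                              * atomically with the switch */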

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
53cae5f471 kernel: Use _reschedule() instead of _Swap() where possible
These two spots were duplicating logic that is already done inside
_reschedule(), which is the cleaner, less dangerous API.  Use it where
possible when outside the scheduler internals.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andy Ross
dc0713a706 kernel: Cleanup. Remove redundant test when calling _Swap()
_Swap() must already handle the case where _get_next_ready_thread() is
the same as _current.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
Andrei Laperie
552a03b48f doc: Documenting enablement of UART1 support for nrf52840_pca10056
Adding a set of BKMs (best known methods) on how to enable and
configure UART1 for the nrf52840_pca10056 board.  These instructions
are likely valid for most other boards in the nrf52840 family.

Signed-off-by: Andrei Laperie <andrei.laperie@intel.com>
2019-02-08 14:48:48 -05:00
Kumar Gala
f2ef52f122 kconfig: kconfigfunctions: update dt_str_val
Clarify the docs for dt_str_val: if the name isn't found, we return
an empty string.  Also clean up the code slightly, as we don't need to
escape the double quote.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 12:02:18 -06:00
Kumar Gala
ff70b3444f dts: Convert CONFIG_ to DT_ symbols for chosen props
Replace generating CONFIG_ symbols with DT_ symbols for chosen
properties like 'zephyr,console' or 'zephyr,bt-mon-uart'.  We now use a
kconfigfunctions helper (dt_str_val) to extract the info from DTS into
Kconfig.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 11:41:26 -06:00
Paul Sokolovsky
55ba23ed03 tests: posix: fs: Decrease ramdisk size, increase stack size
This test seems to be fine in the current master, but during
development it easily starts to overflow the RAM of some boards and/or
crash due to stack checks.  So, decrease the ramdisk (the biggest
eater of RAM here) from the default 96K to 80K, and proactively bump
the main stack size.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2019-02-08 12:07:53 -05:00
Paul Sokolovsky
4fd593068e sub-sys: disk: ram: Make RAM disk size be configurable
The hardcoded 96KB starts to overflow RAM regions and fail CI tests.
A quick test shows that an 80KB ramdisk is OK (it passes
tests/posix/fs).  And of course, targets with a wealth of RAM may want
to use bigger ramdisks.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2019-02-08 12:07:53 -05:00
Jukka Rissanen
62c75107a3 samples: net: can: Add socket CAN sample
Simple socket CAN application that sends data periodically and
receives the data back.

Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
2019-02-08 12:03:34 -05:00
Jukka Rissanen
fc9e414ebf drivers: can: stm32: Add socket CAN support
Add support for socket CAN functionality. This means that the user
is able to use the BSD socket interface to send and receive CAN
packets.
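
The user-facing flow looks roughly like this (interface lookup and
error handling elided; see the sample for the real code):

    #include <net/socket.h>
    #include <net/socket_can.h>

    void can_echo(int ifindex, struct zcan_frame *frame)
    {
        struct sockaddr_can addr = {
            .can_family = AF_CAN,
            .can_ifindex = ifindex,   /* from net_if_get_by_iface() */
        };
        int fd = socket(AF_CAN, SOCK_RAW, CAN_RAW);

        bind(fd, (struct sockaddr *)&addr, sizeof(addr));
        send(fd, frame, sizeof(*frame), 0);
        recv(fd, frame, sizeof(*frame), 0);
    }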

Signed-off-by: Jukka Rissanen <jukka.rissanen@linux.intel.com>
2019-02-08 12:03:34 -05:00
Carlos Stuart
e6a3c31790 doc: cmsis_rtos_v2: Updated documentation
Updated documentation to reflect that osThreadJoin and osThreadDetach
are now supported.

Signed-off-by: Carlos Stuart <carlosstuart1970@gmail.com>
2019-02-08 11:59:38 -05:00
Carlos Stuart
ae07bd4725 tests: cmsis_rtos_v2: Join and detach tests
Implemented tests for the new join and detach features.

 - The first tests multiple join operations.
 - The second tests trying to join a detached thread.
 - The third tests abandoning a join when the thread is detached after
creation.

Signed-off-by: Carlos Stuart <carlosstuart1970@gmail.com>
2019-02-08 11:59:38 -05:00
Carlos Stuart
f5f450eeee lib: cmsis_rtos_v2: Join and detach support
Implements osThreadJoin and osThreadDetach.

This implementation uses a semaphore to signal when a thread is
exiting so any join operations are signalled to continue. It supports
multiple join operations on a single thread, and ensures joins are
aborted if a thread is detached.
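
The gist of the mechanism, as a sketch (names hypothetical; the real
code lives in lib/cmsis_rtos_v2):

    struct cv2_thread_sketch {
        struct k_sem exit_sem;   /* given when the thread exits */
        bool detached;
    };

    static int join_sketch(struct cv2_thread_sketch *t)
    {
        if (t->detached) {
            return -EINVAL;              /* joins abort once detached */
        }
        k_sem_take(&t->exit_sem, K_FOREVER);
        k_sem_give(&t->exit_sem);        /* let other pending joins through */
        return 0;
    }

    /* A detach operation would also give exit_sem, so blocked joiners
     * wake up and observe t->detached.
     */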

Signed-off-by: Carlos Stuart <carlosstuart1970@gmail.com>
2019-02-08 11:59:38 -05:00
Kumar Gala
774a6c31b3 dts: bindings: ccm: Drop base_label setting
We don't need base_label set as we don't use the defines it generates
for CCM.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 10:29:57 -06:00
Kumar Gala
2579adea1e kconfig: kconfigfunctions: Add dt_str_val function
Add dt_str_val to extract a string from the dt conf database.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 10:29:57 -06:00
Kumar Gala
bfaaa6bbe9 dts: Convert CONFIG_CCM to DT_CCM
Since we now do DTS before Kconfig, we should try to remove DTS from
creating Kconfig-namespaced symbols and leave that to Kconfig.  So
rename CONFIG_CCM_<FOO> to DT_CCM_<FOO>.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-08 10:29:57 -06:00
Andrei Emeltchenko
d402d03555 usb: msc: Fix redeclaration of enumerators
Add prefixes to MSC enumerators, otherwise they (e.g. ERROR) conflict
with other enumerators.

...
subsys/usb/class/mass_storage.c:149:2: error: redeclaration of
enumerator ‘ERROR’
 ERROR,        /* error */
 ^~~~~
...
ext/hal/st/stm32cube/stm32f4xx/soc/stm32f4xx.h:216:3: note: previous
 definition of ‘ERROR’ was here
   ERROR = 0U,
   ^~~~~
...
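
The fix is mechanical, e.g. (member names illustrative):

    /* before: ERROR collided with stm32cube's ERROR */
    enum stage { READ_CBW, ERROR, PROCESS_CBW };

    /* after: prefixed, no collision */
    enum msc_stage { MSC_READ_CBW, MSC_ERROR, MSC_PROCESS_CBW };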

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
873ae0b14b usb: cdc_acm: Refactor Kconfig for CDC ACM
Remove unneeded "depends on" and use "if USB_CDC_ACM" instead.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
1e9235259a samples: cdc_acm_composite: Add README documentation
Add README describing the sample.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
d1f84ea781 samples: cdc_acm: Add composite CDC ACM sample
Add a sample creating two USB serial ports and establishing
communication between them.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
d083067424 usb: cdc_acm: Use new device data interface
Use device data interface to handle device data.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
00339509b3 usb: hid: Use new device data interface
Use new interface for getting device data.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
9d85b2add0 usb: Add helpers for getting common device data
Add helpers to be used in USB classes for getting device data.
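
A sketch of what such a helper plausibly looks like (structure and
field names are assumptions based on the 2019-era device model):

    struct usb_dev_data {
        struct device *dev;
        sys_snode_t node;
    };

    static struct usb_dev_data *get_dev_data_by_cfg(sys_slist_t *list,
                                                    struct usb_cfg_data *cfg)
    {
        struct usb_dev_data *data;

        SYS_SLIST_FOR_EACH_CONTAINER(list, data, node) {
            if (data->dev->config->config_info == cfg) {
                return data;
            }
        }
        return NULL;
    }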

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
495426e4d9 usb: hid: Add sys_le16_to_cpu() conversion
Add conversion since interface number is stored in lower byte of
wIndex.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
5863680b2c usb: hid: Add get_dev_data_by_cfg helper
Reduce lookup time by using the helper.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00
Andrei Emeltchenko
81f06f6117 usb: cdc_acm: Use u8_t for interface number
Change argument for get_dev_data_by_iface().

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2019-02-08 11:23:04 -05:00