The scheduler needs a few tweaks to work in SMP mode:
1. The "cache" field just doesn't work. With more than one CPU,
caching the highest priority thread isn't useful as you may need N
of them at any given time before another thread is returned to the
scheduler. You could recalculate it at every change, but that
provides no performance benefit. Remove.
2. The "bitmask" designed to prevent the need to individually check
priorities is likewise dropped. This could work, but in fact on
our only current SMP system and with current K_NUM_PRIOPRITIES
values it provides no real benefit.
3. The individual threads now have a "current cpu" and "active" flag
so that the choice of the next thread to run can correctly skip
threads that are active on other CPUs (see the sketch below).
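A minimal sketch of the SMP-aware selection, assuming an illustrative
flat ready list and the field names above (not the actual code):

    extern struct k_thread *ready_q;    /* hypothetical ready list */

    struct k_thread *_get_next_ready_thread(void)
    {
        int this_cpu = _arch_curr_cpu()->id;
        struct k_thread *best = NULL;

        for (struct k_thread *t = ready_q; t != NULL; t = t->next) {
            if (t->active && t->cpu != this_cpu) {
                continue;    /* running on another CPU: skip */
            }
            if (best == NULL || t->prio < best->prio) {
                best = t;    /* lower value = higher priority */
            }
        }
        return best;
    }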
The upshot is that a decent amount of code gets #if'd out, and the new
SMP implementations for _get_highest_ready_prio() and
_get_next_ready_thread() are simpler and smaller, at the expense of
having to drop older optimizations.
Note that scheduler synchronization is unchanged: all scheduler APIs
used to require that an irq_lock() be held, which means that they now
require the global spinlock via the same API. This should be a very
early candidate for lock granularity attention!
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When in SMP mode, the nested/irq_stack/current fields are specific to
the current CPU and not to the kernel as a whole, so we need an array
of these. Place them in a _cpu_t struct and implement a
_arch_curr_cpu() function to retrieve the pointer.
When not in SMP mode, the first CPU's fields are defined in a union
with the first _cpu_t record. This preserves compatibility with legacy
assembly on other platforms. Long term, all users, including
uniprocessor architectures, should be updated to use the new scheme.
Fundamentally this is just renaming: the structure layout and runtime
code do not change on any existing platforms and won't until someone
defines a second CPU.
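For illustration, the per-CPU record and accessor look roughly like
this (field types are a sketch; the real layout lives in
kernel_structs.h):

    struct _cpu {
        int nested;                  /* interrupt nesting count */
        char *irq_stack;             /* interrupt stack */
        struct k_thread *current;    /* currently scheduled thread */
        int id;                      /* CPU index (illustrative) */
    };
    typedef struct _cpu _cpu_t;

    /* Implemented per arch: returns the record for the executing CPU */
    _cpu_t *_arch_curr_cpu(void);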
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
When using _arch_switch() context switching, the thread return value
is a generic hook and not provided by the architecture.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
K_NUM_PRIORITIES and K_NUM_PRIO_BITMAPS were defined in
nano_internal.h, but used in only a handful of places. Move to
kernel_structs.h (somewhat higher up in the hierarchy) to help with
include file cycle-breaking. Arguably they are a better fit there
anyway.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Kernel object metadata had an extra data field added recently to
store bounds for stack objects. Use this data field to assign
IDs to thread objects at build time. This has numerous advantages:
* Threads can be granted permissions on kernel objects before the
thread is initialized (see the sketch after this list). Previously,
it was necessary to call k_thread_create() with a K_FOREVER delay,
assign permissions, then start the thread. Permissions are still
completely cleared when a thread exits.
* No need for runtime logic to manage thread IDs
* Build error if CONFIG_MAX_THREAD_BYTES is set too low
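A hedged sketch of the new flow (worker/shared_sem are placeholders;
the grant call assumes the k_object_access_grant() userspace API):

    K_THREAD_STACK_DEFINE(worker_stack, 1024);
    static struct k_thread worker;
    static struct k_sem shared_sem;

    extern void worker_entry(void *p1, void *p2, void *p3);

    void start_worker(void)
    {
        k_sem_init(&shared_sem, 0, 1);

        /* Works before the thread is initialized, since the thread
         * object's ID is assigned at build time */
        k_object_access_grant(&shared_sem, &worker);

        k_thread_create(&worker, worker_stack,
                        K_THREAD_STACK_SIZEOF(worker_stack),
                        worker_entry, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(1), K_USER, K_NO_WAIT);
    }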
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
It's currently too easy to run out of thread IDs as they
are never re-used on thread exit.
Now the kernel maintains a bitfield of in-use thread IDs,
updated on thread creation and termination. When a thread
exits, the permission bitfield for all kernel objects is
updated to revoke access for that retired thread ID, so that
a new thread re-using that ID will not gain access to objects
that it should not have.
Because of these runtime updates, setting an object's permission
bitmap to all ones to mark it "public" no longer works properly; a
flag is now set for this instead.
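A minimal sketch of the recycling scheme, with hypothetical helper
names:

    /* One bit per thread ID; CONFIG_MAX_THREAD_BYTES sizes the field */
    static u8_t thread_ids_in_use[CONFIG_MAX_THREAD_BYTES];

    static int thread_id_alloc(void)
    {
        for (int i = 0; i < CONFIG_MAX_THREAD_BYTES * 8; i++) {
            if (!(thread_ids_in_use[i / 8] & BIT(i % 8))) {
                thread_ids_in_use[i / 8] |= BIT(i % 8);
                return i;
            }
        }
        return -1;    /* all IDs in use */
    }

    static void thread_id_free(int id)
    {
        /* Clear this ID's bit in every kernel object's permission
         * bitfield so a later thread re-using the ID inherits
         * nothing; revoke_all_perms() is a hypothetical helper. */
        revoke_all_perms(id);
        thread_ids_in_use[id / 8] &= ~BIT(id % 8);
    }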
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Add the following application-facing memory domain APIs:
k_mem_domain_init() - to initialize a memory domain
k_mem_domain_destroy() - to destroy a memory domain
k_mem_domain_add_partition() - to add a partition into a domain
k_mem_domain_remove_partition() - to remove a partition from a domain
k_mem_domain_add_thread() - to add a thread into a domain
k_mem_domain_remove_thread() - to remove a thread from a domain
A memory domain contains some number of memory partitions.
A memory partition is a memory region (might be RAM, peripheral
registers, flash...) with specific attributes (access permission,
e.g. privileged read/write, unprivileged read-only, execute never...).
Memory partitions are backed by a set of MPU regions or MMU tables
underneath.
A thread can belong to only a single memory domain at any point in
time, but a memory domain can contain multiple threads.
Threads in the same memory domain have the same access permissions
to the memory partitions belonging to that domain.
The memory domain APIs are used by unprivileged threads to share data
with threads in the same memory domain and protect sensitive data
from threads outside their domain. This not only improves security
but is also useful for debugging (an unexpected access causes an
exception).
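A sketch of intended usage (buffer, names, and the attribute macro
are illustrative; attributes are MPU/MMU-backend specific):

    static char __aligned(32) shared_buf[256];

    static struct k_mem_partition shared_part = {
        .start = (u32_t)shared_buf,
        .size = sizeof(shared_buf),
        .attr = K_MEM_PARTITION_P_RW_U_RW,    /* backend-defined */
    };

    static struct k_mem_domain app_domain;

    void share_between(k_tid_t t1, k_tid_t t2)
    {
        struct k_mem_partition *parts[] = { &shared_part };

        k_mem_domain_init(&app_domain, ARRAY_SIZE(parts), parts);
        /* Both threads now have identical access to shared_buf */
        k_mem_domain_add_thread(&app_domain, t1);
        k_mem_domain_add_thread(&app_domain, t2);
    }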
Jira: ZEP-2281
Signed-off-by: Chunlin Han <chunlin.han@linaro.org>
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.
This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.
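Conceptually (a sketch; the magic value and the fatal-error hook are
assumptions, not the exact implementation):

    #define STACK_SENTINEL 0xF0F0F0F0u    /* assumed magic value */

    static void check_stack_sentinel(struct k_thread *thread)
    {
        u32_t *sentinel = (u32_t *)thread->stack_info.start;

        if (*sentinel != STACK_SENTINEL) {
            /* The lowest word was overwritten: stack overflow */
            _k_except_reason(_NANO_ERR_STACK_CHK_FAIL);
        }
    }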
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.
This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.
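The adjusted internal signature looks roughly like this (the exact
parameter list is a sketch and varies by tree):

    void _new_thread(struct k_thread *thread,    /* passed separately */
                     char *stack, size_t stack_size,
                     _thread_entry_t entry, void *p1, void *p2, void *p3,
                     int prio, unsigned int options);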
By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.
Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Historically, space for struct k_thread was always carved out of the
thread's stack region. However, we want more control on where this data
will reside; in memory protection scenarios the stack may only be used
for actual stack data and nothing else.
On some platforms (particularly ARM), including kernel_arch_data.h from
the toplevel kernel.h exposes intractable circular dependency issues.
We create a new per-arch header "kernel_arch_thread.h" with very limited
scope; it only defines the three data structures necessary to instantiate
the arch-specific bits of a struct k_thread.
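The header's shape, per arch (shown empty; contents are
arch-specific):

    /* kernel_arch_thread.h */
    struct _caller_saved {
        /* registers saved on interrupt entry */
    };

    struct _callee_saved {
        /* registers preserved across a context switch */
    };

    struct _thread_arch {
        /* any other arch-specific per-thread state */
    };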
Change-Id: I3a55b4ed4270512e58cf671f327bb033ad7f4a4f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This patch adds the stack information to the k_thread data structure.
The information is set when creating a new thread (_new_thread)
and is used by the scheduling process.
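The added field, roughly (a sketch using the kernel's integer types):

    struct _thread_stack_info {
        u32_t start;    /* stack buffer start address */
        u32_t size;     /* stack buffer size in bytes */
    };

    /* embedded in struct k_thread as: */
    /*     struct _thread_stack_info stack_info; */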
Change-Id: Ibe79fe92a9ef8bce27bf8616d8e0c878508c267d
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@linaro.org>
We do the same thing on all arches right now for thread_monitor_init,
so let's put it in a common place. This should also fix an issue on
xtensa when the thread monitor is enabled (a reference to
_nanokernel.threads).
Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
We do much of the same work on all arches to set up a new thread,
so let's put that code in a common place to unify it for everyone
and reduce duplicated code.
Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types. This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies. We also convert the PRI printf formatters in the arch
code over to normal formatters.
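For flavor, a typical before/after of the conversion (count is a
placeholder):

    /* before:
     *     uint32_t count;
     *     printk("count: %" PRIu32 "\n", count);
     */

    /* after: */
    u32_t count;

    void show_count(void)
    {
        printk("count: %u\n", count);
    }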
Jira: ZEP-2051
Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This has not bitten us yet, but it was a ticking time bomb.
This is similar to the issue that was found with irq_lock/irq_unlock
implementations on several architectures. Making the sched_lock
variable volatile is not the way to force it to be
incremented/decremented around the accesses to the data it protects.
Instead, a compiler barrier must prevent the compiler from reordering
the memory accesses around setting of sched_lock. Needed in the inline
implementations _sched_lock()/_sched_unlock_no_reschedule(), which
resolve to simple decrement/increment of the per-thread sched_lock
variable.
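A minimal sketch of the fix, assuming a compiler_barrier() macro
along the lines of __asm__ volatile("" ::: "memory"):

    static ALWAYS_INLINE void _sched_lock(void)
    {
        --_current->base.sched_locked;    /* 0 -> 0xff: locked */
        compiler_barrier();    /* protected accesses stay below */
    }

    static ALWAYS_INLINE void _sched_unlock_no_reschedule(void)
    {
        compiler_barrier();    /* protected accesses stay above */
        ++_current->base.sched_locked;
    }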
Change-Id: I06f5b3524889f193efe69caa947118404b1be0b5
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to
publicize them.
Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.
Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
The execution_flags will store the user-facing states of a thread.
This also fixes a bug where K_ESSENTIAL was already assigned to
execution_flags via the options field of
k_thread_spawn()/K_THREAD_DEFINE().
Change-Id: I91ad7a62b5d180e09eead8985ff519809959ecf2
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.
Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Unused.
Reuse its bit for K_FP_REGS to keep the used bits as low as possible.
Change-Id: I5998801ef34156271d4f66d1948a05e0b2ce58f7
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
This will be needed for some thread user options that will move to
kernel.h since they are part of the user API.
Change-Id: I46e302b6cafcdddbad3458134b98feb5b8d45d9b
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
When prio and sched_locked were moved into a struct together to create a
union with the combined preempt field, the volatile qualifier moved from
sched_locked to prio by mistake.
Change-Id: I5a8e01324f14e77e3d7162c12515471826023633
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.
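That is, the multi-line Apache 2.0 license block at the top of each
file becomes a single tag:

    /* SPDX-License-Identifier: Apache-2.0 */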
Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.
Manually updated subsys/logging/sys_log.c, which had a malformed
header in the original file. Also cleaned up several cases that
already had an SPDX tag, where we either got a duplicate or missed
updating.
Jira: ZEP-1457
Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
These two fields in the thread structure control the preemptibility of a
thread.
sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.
By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.
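Sketched out (little-endian layout shown; a real header would flip
the byte order for big-endian targets):

    struct _thread_base_sketch {
        union {
            struct {
                s8_t prio;            /* coop if 0x80..0xff */
                u8_t sched_locked;    /* 0xff..0x01 while locked */
            };
            u16_t preempt;    /* the two fields read as one value */
        };
    };

    /* The single test needed on interrupt exit: */
    #define _NON_PREEMPT_THRESHOLD 0x0080
    #define _is_preemptible(base) ((base)->preempt < _NON_PREEMPT_THRESHOLD)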
Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Some thread fields were 32 bits wide when they are not even close to
using that full range of values. They are changed to 8-bit fields instead.
- prio can fit in one byte, limiting the priorities range to -128 to 127
- recursive scheduler locking can be limited to 255; a rollover most
probably results from a logic error
- flags are split into execution flags and thread states; 8 bits is
enough for each of them currently, with at worst two states and four
flags to spare on x86 (on other arches, there are six flags to spare)
Doing this saves 8 bytes per stack. It also sets up an upcoming
enhancement to checking whether the current thread is preemptible on
interrupt exit.
Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Remove legacy option and use SYS_CLOCK_EXISTS where appropriate.
Change-Id: I3d524ea2776e638683f0196c0cc342359d5d810f
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Use least significant bits for common flags and high bits for
arch-specific ones.
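For example (values illustrative):

    /* common options: low bits */
    #define K_ESSENTIAL    (1 << 0)
    #define K_FP_REGS      (1 << 1)

    /* arch-specific options: high bits */
    #define K_SSE_REGS     (1 << 7)    /* x86 only */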
Change-Id: I982719de4a24d3588c19a0d30bbe7a27d9a99f13
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Not needed, since only the thread itself can modify its own
sched_locked count.
Change-Id: I3d3d8be548d2b24ca14f51637cc58bda66f8b9ee
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.
Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>