Commit graph

1036 commits

Benjamin Walsh
a8978aba8f kernel: rename thread states symbols
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.

Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
David B. Kinder
ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file.  Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.
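
For illustration, the per-file header change looks roughly like this
(abridged; the original boilerplate is the full Apache 2.0 notice):

    /* Before (abridged):
     *
     * Copyright (c) 2016 Intel Corporation
     *
     * Licensed under the Apache License, Version 2.0 (the "License");
     * you may not use this file except in compliance with the License.
     * ...
     */

    /* After:
     *
     * Copyright (c) 2016 Intel Corporation
     *
     * SPDX-License-Identifier: Apache-2.0
     */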

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Benjamin Walsh
168695c7ef kernel/arch: inspect prio/sched_locked together for preemptibility
These two fields in the thread structure control the preemptibility of a
thread.

sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.

Putting them end-to-end means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.
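
A minimal, self-contained C sketch of the layout and the single check
described above (field and macro names are illustrative, not the exact
kernel ones):

    #include <stdint.h>
    #include <stdio.h>

    struct thread_base {
        union {
            struct {
                /* little-endian: prio is the low byte */
                int8_t prio;                    /* negative = cooperative   */
                volatile uint8_t sched_locked;  /* non-zero = sched locked  */
            };
            uint16_t preempt;                   /* bundled 16-bit value     */
        };
    };

    /* Non-preemptible if coop (prio reads as 0x80..0xff unsigned) or the
     * scheduler is locked (high byte non-zero): both make preempt >= 0x0080. */
    static int is_preemptible(struct thread_base *t)
    {
        return t->preempt < 0x0080;
    }

    int main(void)
    {
        struct thread_base t;

        t.sched_locked = 0;
        t.prio = 5;
        printf("preemptible: %d\n", is_preemptible(&t));  /* prints 1 */

        t.prio = -1;                                      /* coop thread */
        printf("preemptible: %d\n", is_preemptible(&t));  /* prints 0 */
        return 0;
    }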

Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:25 +00:00
Benjamin Walsh
f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32 bits wide, even though they do not come close
to using that full range of values. They are instead changed to 8-bit fields.

- prio can fit in one byte, limiting the priority range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare on x86 (other archs have six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.
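
Roughly, the affected fields shrink like this (a sketch with assumed
names and "before" types, not the literal structure):

    #include <stdint.h>

    /* Before: each field took a full 32-bit word (12 bytes). */
    struct thread_fields_before {
        int32_t  prio;
        uint32_t sched_locked;
        uint32_t flags;
    };

    /* After: 8 bits each, with flags split into execution flags and
     * thread states (4 bytes) -- saving 8 bytes per thread. */
    struct thread_fields_after {
        int8_t  prio;            /* -128..127                        */
        uint8_t sched_locked;    /* recursion count; rollover = bug  */
        uint8_t execution_flags;
        uint8_t thread_state;
    };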

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Benjamin Walsh
e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Anas Nashif
fad7e2dd8d logging: move event_logger to subsys/logging
Jira: ZEP-1337
Change-Id: If1690e19a882cf53caaa3418ccabeb49c783f63d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-25 14:34:43 -05:00
Anas Nashif
c1347b4730 kernel: replace all remaining nanokernel occurrences
Replace #include <nanokernel.h> with <kernel.h> everywhere and also fix
any remaining mentions of nanokernel.

Keep the legacy samples/tests as is.

Change-Id: Iac48447bd191e83f21a719c69dc26233216d08dc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-25 14:34:43 -05:00
Benjamin Walsh
bfa5653e9a arch: remove instances of fiberRtnValueSet()
Obsolete, replaced by _set_thread_return_value().

Change-Id: I23e9cfc07e43542f0965817edc3552d456fd2ef3
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:08 +00:00
Anas Nashif
87133d5def debug: gdb: move to new kernel APIs
Change-Id: Ifed1fe7c60fa150ee3ef4fefabafeb95312bf8bc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif
d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif
0d775bcd9c x86: remove obsolete comment about tasks/fibers
Change-Id: Iff911329f5c981d0d47880924e8a4d52478423fd
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:42 +00:00
Anas Nashif
cb888e6805 kernel: remove nano/micro wording and usage
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.

Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:03 +00:00
Benjamin Walsh
48db0b3443 arch/all: simpler _SysFatalErrorHandler()
- does not pull in printk(), for potential footprint gain
- does not pull in k_thread_abort(), for single-threaded systems

Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Ibc6a198b81a6cd73117d1e85aa05b92a4501a34d
2016-12-15 16:17:39 -05:00
Benjamin Walsh
8e4a534ea1 kernel: enable and optimize coop-only configurations
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.
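
One way this kind of compile-time optimization can look (a sketch
assuming a preemption Kconfig symbol such as CONFIG_PREEMPT_ENABLED; not
the exact kernel code):

    #ifdef CONFIG_PREEMPT_ENABLED
    /* Preemptible threads exist: really track the per-thread lock count. */
    void k_sched_lock(void);
    void k_sched_unlock(void);
    #else
    /* Coop-only build: threads already cannot be preempted, so these
     * calls compile away to nothing and the interrupt exit path never
     * needs to consider rescheduling. */
    static inline void k_sched_lock(void) { }
    static inline void k_sched_unlock(void) { }
    #endif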

Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
c3a2bbba16 kernel: add k_cpu_idle/k_cpu_atomic_idle()
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.

The kernel internals now make use of these APIs instead of the legacy
ones.
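
A sketch of how the atomic variant is typically used, for instance in a
single-threaded busy-wait (assuming the usual irq_lock()/irq_unlock()
pairing):

    #include <kernel.h>

    /* Wait for an ISR to set *flag without racing against the interrupt:
     * k_cpu_atomic_idle() re-enables interrupts and idles the CPU as one
     * operation, so a wakeup cannot slip in between the check and the halt. */
    static void wait_for_flag(volatile int *flag)
    {
        for (;;) {
            unsigned int key = irq_lock();

            if (*flag) {
                irq_unlock(key);
                return;
            }
            k_cpu_atomic_idle(key);
        }
    }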

Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
88b3691415 kernel/arch: enhance the "ready thread" cache
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.

However, this caused two problems:

1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.

2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.

To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached thread
is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.
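
In C terms, the interrupt-exit decision boils down to something like the
sketch below (the real code is per-architecture assembly; the
declarations here are stand-ins):

    struct k_thread;

    struct ready_q  { struct k_thread *cache; };
    struct kernel_s { struct k_thread *current; struct ready_q ready_q; };

    extern struct kernel_s _kernel;
    extern unsigned int irq_lock(void);
    extern void _Swap(unsigned int key);

    static inline void exit_isr_maybe_reschedule(void)
    {
        /* The cache is refilled eagerly, so it always names the best
         * ready thread: one memory fetch decides the outcome. */
        if (_kernel.ready_q.cache != _kernel.current) {
            _Swap(irq_lock());
        }
    }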

On the ARM FRDM K64F board, this improvement is seen:

Before:

1- Measure time to switch from ISR back to interrupted task

   switching time is 215 tcs = 1791 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 315 tcs = 2625 nsec

After:

1- Measure time to switch from ISR back to interrupted task

   switching time is 130 tcs = 1083 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 225 tcs = 1875 nsec

These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.

Fixes ZEP-1401.

Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
Andrew Boie
452fd7a5c2 x86: don't set segment registers if we don't set GDT
We have no idea what's in the GDT if we don't set it ourselves.

Change-Id: I3c2e406370e3ea149252c423d66c97aab95bee17
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-29 14:51:57 -08:00
Benjamin Walsh
b2974a666d kernel/arch: move common thread.flags definitions to common file
Also remove NO_METRIC, which is not referenced anywhere anymore.

Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-26 14:04:18 +00:00
Benjamin Walsh
069fd3624e kernel: streamline initialization of _thread_base and timeouts
Move _thread_base initialization to _init_thread_base(), remove mention
of "nano" in timeouts init and move timeout init to _init_thread_base().
Initialize all base fields via the _init_thread_base in semaphore groups
code.

Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:27:42 +00:00
Benjamin Walsh
8fcc7f69da kernel/arch: remove unused uk_task_ptr parameter from _new_thread()
An artifact from the microkernel, used for handling multiple pending
tasks on nanokernel objects.

Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:23:57 +00:00
Andrew Boie
2ffa516d89 x86: set accessed bit in ROM-based GDT
Previous configuration was backwards. From the Intel manual:

"If the segment descriptors in the GDT or an LDT are placed in ROM,
the processor can enter an indefinite loop if software or the
processor attempts to update (write to) the ROM-based segment
descriptors. To prevent this problem, set the accessed bits
for all segment descriptors placed in a ROM. Also, remove
operating-system or executive code that attempts to modify
segment descriptors located in ROM."

Only by some miracle has this not been causing problems.
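
Concretely, the "accessed" bit is bit 0 of each descriptor's access byte,
and for a ROM GDT it must already be set so the CPU never tries to write
it back. The macro names and flat-model values below are illustrative
only:

    #define SEG_ACCESSED   0x01                        /* bit 0 of access byte   */
    #define SEG_CODE_XR    0x9A                        /* present, ring 0, code  */
    #define SEG_DATA_RW    0x92                        /* present, ring 0, data  */

    /* ROM-resident descriptors: pre-set the accessed bit. */
    #define ROM_CODE_ACCESS (SEG_CODE_XR | SEG_ACCESSED)   /* 0x9B */
    #define ROM_DATA_ACCESS (SEG_DATA_RW | SEG_ACCESSED)   /* 0x93 */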

Change-Id: I0bb915962a1069876d2486473760112102feae7b
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-19 00:57:04 +00:00
Benjamin Walsh
669360d5ec kernel: fix thread prio and stack size types in some APIs
Prio should be an int, not a fixed-size int32_t, since its values are
small integers. This aligns with the prio parameters of the other APIs.

Stack size should be size_t.

Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-18 23:08:46 +00:00
Inaky Perez-Gonzalez
11bd718733 fatal error handlers: report which thread croaked
When a thread dies, at least print the pointer to it, so we can debug
better.

Change-Id: Ief6bbc0c221e2d5271c240a4b73df16413aa5e22
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2016-11-17 14:36:50 +00:00
Allan Stephens
c98da84e69 doc: Various corrections to doxygen info for Kernel APIs
Most kernel APIs are now ready for inclusion in the API guide.
The APIs largely follow a standard template to provide users
of the API guide with a consistent look-and-feel.

Change-Id: Ib682c31f912e19f5f6d8545d74c5f675b1741058
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-16 21:43:16 +00:00
Benjamin Walsh
f6ca7de09c kernel/arch: consolidate tTCS and TNANO definitions
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.

Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that needs what is provided
by the common stuff in kernel_structs.h. Those two files are only meant
to be included in kernel_structs.h in very specific locations.

The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.

The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.
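
The resulting shape, abridged (member names here are indicative only,
not the full contents of kernel_structs.h):

    struct k_thread {
        struct _thread_base base;            /* common part             */
        struct _callee_saved callee_saved;   /* arch register layout    */
        void *init_data;
        void *swap_data;
        struct _thread_arch arch;            /* from kernel_arch_data.h */
    };

    struct _kernel {
        struct k_thread *current;
        struct _ready_q ready_q;
        struct _kernel_arch arch;            /* from kernel_arch_data.h */
    };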

Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code now includes
offsets_short.h instead of offsets.h.

Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-12 07:04:52 -05:00
Ramesh Thomas
a3dc53f2a6 power_mgmt: Do not notify deep sleep if bootloader does context restore
Some bootloaders have power management support to restoer context
upon resume from deep sleep. In such cases, the OS startup code
should call the notification hook. Create Kconfig flags to configure
this option.

Jira: 1257
Change-Id: I9f40c5fa077c2f17dc8e9f11604c3ed17e549ed5
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2016-11-11 20:40:53 +00:00
Ramesh Thomas
c0cd7acf34 power_mgmt: Simplify _sys_soc_resume notification
The _sys_soc_resume hook is overloaded to handle two different
scenarios. It is primarily called to notify exit from kernel idling
after PM operations. It is also used to notify exit from deep sleep.
This is very confusing and also makes the implementation of the hook
function very difficult, because very different conditions are
involved in the two use cases. Further, users may not require either
or both use cases, depending on their custom boot flow and power
state handling. To simplify, create a separate hook for the purpose
of deep sleep exit notification, and use the existing one only to
notify kernel idling exit after PM operations.
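
In other words, instead of one overloaded hook there are now two; the
deep-sleep hook name below is an assumption for illustration:

    /* Existing hook: now only signals kernel idle exit after PM operations. */
    void _sys_soc_resume(void);

    /* Separate hook: only signals exit from deep sleep (assumed name). */
    void _sys_soc_resume_from_deep_sleep(void);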

Jira: ZEP-1256
Change-Id: I96350199a0fd37f16590c8ee5302a94a3d71b8ba
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2016-11-11 20:40:52 +00:00
Anas Nashif
7cac3b9625 arch: arc: arm: sys_thread_self_get -> k_current_get
Change-Id: Iaa01b0d8733d76888524cfd258bacbd9c11142de
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-11-10 18:52:51 +00:00
Allan Stephens
bce8fbb61e kernel: Clean up of x86 floating point code
Updates x86 floating point support to reflect changes that have
been made in recent months.

* Many, many, many cosmetic changes (mostly revisions to comments).

* Elimination of unnecessary function aliases that were needed
  to support the task and fiber versions of certain APIs.

* Elimination of run-time code to enable a thread's "FP regs"
  option bit if the "SSE regs" option bit was set. The kernel
  now recognizes that the thread is using the FPU as long as
  either option bit is set. (If the thread has both option bits
  enabled this is the same as if only the "SSE regs" bit is set.)

Change-Id: Ic12abc54b6fa78921749b546d8debf23e7ad232d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-09 23:51:30 +00:00
Andrew Boie
56f561e15e arches: use new kernel APIs
Change-Id: I4b6f5264d5295ebf4278991a1f4e2141bef6602f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-09 20:49:40 +00:00
Andrew Boie
0b474eef9c kernel: deprecate old init levels
PRIMARY, SECONDARY, NANOKERNEL, MICROKERNEL init levels are now
deprecated.

New init levels introduced: PRE_KERNEL_1, PRE_KERNEL_2, POST_KERNEL
to replace them.

In most existing code, instances of PRIMARY are replaced with
PRE_KERNEL_1 and SECONDARY with POST_KERNEL, since SECONDARY had a
longstanding bug: the documentation specified that SECONDARY ran before
the kernel started up, but it actually ran afterwards.
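
For example, an init routine that used to register at the PRIMARY level
would now look roughly like this (function name and priority value are
placeholders; a sketch assuming the usual SYS_INIT() macro from
<init.h>):

    #include <init.h>
    #include <device.h>

    static int my_driver_init(struct device *dev)
    {
        ARG_UNUSED(dev);
        /* runs before the kernel is up, like the old PRIMARY level */
        return 0;
    }

    /* Old: SYS_INIT(my_driver_init, PRIMARY, 32);  (now deprecated) */
    SYS_INIT(my_driver_init, PRE_KERNEL_1, 32);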

Change-Id: I771bc634e9caf7f17dbf214a270bc9967eed7d32
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-09 17:59:44 +00:00
Benjamin Walsh
3cc2ba9f9c kernel: add __ASSERT() for thread priorities
Verify the thread priorities are within the bounds when starting a new
thread and when changing the priority of a thread.
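
One plausible shape for the new check (the exact bounds macros used by
the kernel may differ):

    __ASSERT(prio >= K_HIGHEST_APPLICATION_THREAD_PRIO &&
             prio <= K_LOWEST_APPLICATION_THREAD_PRIO,
             "invalid thread priority %d", prio);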

Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-08 20:27:31 -05:00
Andrew Boie
ee95dd22a4 x86: remove CONFIG_NANOKERNEL references
Change-Id: I8c6ca9189dd09133162816675e33332d6e5a34b3
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-08 22:02:45 +00:00
Allan Stephens
f48f263665 kernel: Rename USE_FP and USE_SSE symbols
Symbols now use the K_ prefix which is now standard for the
unified kernel. Legacy support for these symbols is retained
to allow existing applications to build successfully.
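
For example, a floating-point thread is now tagged with the renamed
option bit when it is spawned (stack and entry names below are
placeholders; the call form follows that era's k_thread_spawn() API):

    /* USE_FP / USE_SSE become K_FP_REGS / K_SSE_REGS */
    k_thread_spawn(fp_stack, sizeof(fp_stack),
                   fp_thread_entry, NULL, NULL, NULL,
                   K_PRIO_COOP(7), K_FP_REGS, K_NO_WAIT);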

Change-Id: I3ff12c96f729b535eecc940502892cbaa52526b6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-07 18:52:31 +00:00
Anas Nashif
12ffc58d4b benchmarks: rename _NanoTscRead -> _tsc_read
Change-Id: Id5687f79ac13136f14a14d250e149436a0173f04
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-11-07 15:39:15 +00:00
Andrew Boie
6e172b8abd x86: remove legacy kernel support
Change-Id: I81111a58d1305bd521ea93adc40c66b43f20977c
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-04 11:45:13 -07:00
Allan Stephens
a3f3de3741 unified: Rename ESSENTIAL to K_ESSENTIAL
Adds standard prefix to symbolic option that flags a thread
as essential to system operation.

Change-Id: Ia904a81ce343fdd1cd44caaaeae641d822777f9b
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-04 00:47:08 +00:00
Allan Stephens
743bdb8143 unified: Enable handling of thread options for static threads
Change-Id: I51d2d9cfa0eeb5f974a6cf1db32406399ef57418
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-10-27 08:36:14 -05:00
Allan Stephens
2220f25f0a kernel: Standardize thread monitoring initialization
Gets rid of unnecessary THREAD_MONITOR_INIT() macro, to be
consistent with the approach taken by _thread_monitor_exit().

Aligns x86 code with the approach used on other architectures.

Revises the associated comments and removes unnecessary
doxygen tags.

Change-Id: Ied1aebcd476afb82f61862b77264efb8a7dc66c9
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-10-26 17:03:12 +00:00
Andrew Boie
26b1651f0c intstub.S: fix argument to _sys_power_save_idle_exit on IAMCU
Change-Id: I5aa1abe464ba2b8f9c36be78a95705ffcf993c7d
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-28 20:28:27 +00:00
Andrew Boie
70d8a32740 x86: interrupts: consolidate duplicated code in idle path
Change-Id: I16b80f363fef17d3ea99fec0ced4f49238f8e6c7
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-28 20:28:07 +00:00
Andrew Boie
e56f61f5aa x86: exceptions: simplify exception stubs
Exception stubs now just push the handler and in some cases a dummy
error code before jumping to the exception handling code, never to
return.

Change-Id: I6a79d9243deb3fc7ccdae003dd0917364c0aa304
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-28 20:28:07 +00:00
Andrew Boie
edeb1f1c52 x86: interrupts: optimize and simplify IRQ stubs
Interrupt stubs now just push the ISR and parameter onto the stack
and jump to the common interrupt code, never to return.

Change-Id: I82543d8148b5c7dfe116c43f41791f852614bb28
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-28 20:28:06 +00:00
Andrew Boie
2d7490c7ce x86: don't unconditionally run ISRs with interrupts enabled
Re-enabling interrupts before running the ISR must only be done
when CONFIG_NESTED_INTERRUPTS is turned on.

Change-Id: I2c04f2ce08d41cfef5553ee8554a90d1be0e86a3
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-26 17:53:45 +00:00
Andrew Boie
99368c7435 x86: optimize GDT space
The CPU manual indicates that 8-byte alignment is sufficient; it is not
clear why gdt_rom was aligned on a 16-byte boundary.

The null descriptor in the GDT is never looked at by the CPU, so save a
few bytes by putting the 6-byte pseudo descriptor there.

Change-Id: I73f26cdeb30a91f8258c88ef960a45812a11d959
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-20 20:47:15 +00:00
Andrew Boie
757dae5b7d x86: introduce new segmentation.h header
This header has a bunch of data structure definitions and macros useful
for manipulating segment descriptors on x86. The old IDT_ENTRY definition
is removed in favor of the new 'struct segment_descriptor' which can be
used for all segment descriptor types and not just IRQ gates.
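
The kind of definition this header provides, sketched as a packed
bitfield (field names here are illustrative, not the exact ones in
segmentation.h):

    #include <stdint.h>

    struct segment_descriptor_sketch {
        uint16_t limit_low;      /* limit bits 0..15           */
        uint16_t base_low;       /* base  bits 0..15           */
        uint8_t  base_mid;       /* base  bits 16..23          */
        uint8_t  type : 4;       /* gate/segment type          */
        uint8_t  s : 1;          /* 0 = system, 1 = code/data  */
        uint8_t  dpl : 2;        /* descriptor privilege level */
        uint8_t  p : 1;          /* present                    */
        uint8_t  limit_hi : 4;   /* limit bits 16..19          */
        uint8_t  avl : 1;
        uint8_t  l : 1;          /* 64-bit code segment        */
        uint8_t  db : 1;         /* default operand size       */
        uint8_t  g : 1;          /* granularity                */
        uint8_t  base_hi;        /* base bits 24..31           */
    } __attribute__((packed));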

We also add some inline helper functions for examining segment registers,
descriptor tables, and doing far jumps/calls.

Change-Id: I640879073afa9765d2a214c3fb3c3305fef94b5e
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-09-20 20:46:45 +00:00
Peter Mitsis
68d1f4b562 unified: Add timeslice support
Change-Id: I5b6c1ef5c015d1ddaea21b1c5447336b1b04db39
Signed-off-by: Peter Mitsis <peter.mitsis@windriver.com>
2016-09-20 15:28:54 +00:00
Benjamin Walsh
b32d0ff71e unified/x86: fix IAMCU build
The unified kernel does not provide the _thread_arg_t type, but instead
uses void * directly for its thread entry parameters. _thread_arg_t is
typedefed from void * anyway, and only obfuscates the type. So, define
_thread_entry_t to be a function pointer to a function with three void *
parameters; when the unified kernel becomes the only kernel, all the
_thread_arg_t types will go away.
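
The new typedef, sketched:

    typedef void (*_thread_entry_t)(void *p1, void *p2, void *p3);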

With this change, IAMCU runs all the tests that SysV x86 is able to run
as a unified kernel.

Change-Id: I53c8754629a5a0a114a16a775ff1efc1884496ff
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-15 09:42:24 -04:00
Benjamin Walsh
983cbe398c unified/x86: add unified kernel support for x86 arch
The x86 architecture port is fitted with support for the unified kernel,
namely:

- the interrupt exit code now calls _Swap() if the current
  thread is not a coop thread and if the scheduler is not locked

- there are no 'task' fields in the _nanokernel anymore: _Swap()
  now calls _get_next_ready_thread instead

- the _nanokernel.fiber field is replaced by a more sophisticated
  ready_q, based on the microkernel's priority-bitmap-based one

- nano_private includes nano_internal.h from the unified directory

- the FIBER, TASK and PREEMPTIBLE flags do not exist anymore: the thread
  priority drives the behaviour

- the tcs uses a dlist for queuing in both ready and wait queues instead
  of a custom singly-linked list

- other new fields in the tcs include a schedule-lock count, a
  back-pointer to init data (when the task is static) and a pointer to
  swap data, needed when a thread pending on _Swap() must be passed more
  than just one value (e.g. k_stack_pop() needs an error code and data)

- fiberRtnValueSet() is aliased to _set_thread_return_value since it
  also operates on preempt threads now

- _set_thread_return_value_with_data() sets the swap_data field in
  addition to a return value from _Swap()

- convenience aliases are created for shorter names:

  - _current is defined as _nanokernel.current
  - _ready_q is defined as _nanokernel.ready_q

- _Swap() sets the thread's return code to -EAGAIN before swapping out,
  so that timeouts do not have to set it (solves hard issues in some
  kernel objects).

- Floating point support.

Note that, in _Swap(), the register holding the thread to be swapped in has
been changed from %ecx to %eax in both the legacy kernel and the unified kernel
to take advantage of the fact that the return value of _get_next_ready_thread()
is stored in %eax, and this avoids moving it to %ecx.

Work by: Dmitriy Korovkin <dmitriy.korovkin@windriver.com>
         Allan Stephens <allan.stephens@windriver.com>
         Benjamin Walsh <benjamin.walsh@windriver.com>

Change-Id: I4ce2bd47bcdc62034c669b5e889fc0f29480c43b
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-13 17:12:55 -04:00
Benjamin Walsh
b9a0d90a5f x86: load _nanokernel in %edi in _Swap()
Loading the _nanokernel address in %edi rather than in %eax allows
calling functions in _Swap() without having to restore it, since %eax is
used for the return value. %edi is a callee-saved register and does not
have to be restored.

Change-Id: I338086d8e15857e835d5d7487de975791926f869
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-09-13 17:12:55 -04:00