Commit graph

1001 commits

Kumar Gala
a45ea3806f x86: Rework x86 related code to use new DTS macros
Replace DT_PHYS_RAM_ADDR and DT_RAM_SIZE with DT_REG_ADDR/DT_REG_SIZE
for the DT_CHOSEN(zephyr_sram) node.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-04-30 08:37:18 -05:00
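For context, a minimal sketch of the replacement described above, assuming the standard devicetree.h macros; the derived macro names here are illustrative, not from the commit:

    #include <devicetree.h>

    /* Chosen SRAM node, as referenced by DT_CHOSEN(zephyr_sram) above */
    #define SRAM_NODE DT_CHOSEN(zephyr_sram)

    /* Stand-ins for the old DT_PHYS_RAM_ADDR / DT_RAM_SIZE defines */
    #define SRAM_BASE_ADDR DT_REG_ADDR(SRAM_NODE) /* base address of SRAM */
    #define SRAM_SIZE      DT_REG_SIZE(SRAM_NODE) /* size of SRAM region */
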
Stephanos Ioannidis
0e6ede8929 kconfig: Rename CONFIG_FLOAT to CONFIG_FPU
This commit renames the Kconfig `FLOAT` symbol to `FPU`, since this
symbol only indicates that the hardware Floating Point Unit (FPU) is
used and does not imply and/or indicate the general availability of
toolchain-level floating point support (i.e. this symbol is not
selected when building for an FPU-less platform that supports floating
point operations through the toolchain-provided software floating point
library).

Moreover, given that the symbol that indicates the availability of FPU
is named `CPU_HAS_FPU`, it only makes sense to use "FPU" in the name of
the symbol that enables the FPU.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-27 19:03:44 +02:00
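A hedged illustration of the distinction the rename draws (not code from the commit itself):

    /* Code that touches the hardware FPU now guards on the renamed symbol. */
    #ifdef CONFIG_FPU
    /* The hardware FPU is actually used: FP register context is handled. */
    #endif

    #ifdef CONFIG_CPU_HAS_FPU
    /* The CPU merely *has* an FPU; CONFIG_FPU decides whether it is used.
     * Without it, floating point (if any) comes from the toolchain's
     * software float library.
     */
    #endif
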
Andrew Boie
618426d6e7 kernel: add Z_STACK_PTR_ALIGN and ARCH_STACK_PTR_ALIGN
This operation is formally defined as rounding down a potential
stack pointer value to meet CPU and ABI requirements.

This was previously defined ad-hoc as STACK_ROUND_DOWN().

A new architecture constant ARCH_STACK_PTR_ALIGN is added.
Z_STACK_PTR_ALIGN() is defined in terms of it. This used to
be inconsistently specified as STACK_ALIGN or STACK_PTR_ALIGN;
in the latter case, STACK_ALIGN meant something else, typically
a required alignment for the base of a stack buffer.

STACK_ROUND_UP() is only used in practice by RISC-V; delete it
elsewhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
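A minimal sketch of the relationship described above, assuming Zephyr's ROUND_DOWN() helper; the value 16 is only an example, not a real arch constant:

    /* Per-arch constant: required alignment for stack pointer values */
    #define ARCH_STACK_PTR_ALIGN 16

    /* Round a candidate stack pointer down to meet CPU/ABI requirements
     * (the operation formerly done ad hoc by STACK_ROUND_DOWN()).
     */
    #define Z_STACK_PTR_ALIGN(ptr) ROUND_DOWN((ptr), ARCH_STACK_PTR_ALIGN)
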
Andrew Boie
1f6f977f05 kernel: centralize new thread priority check
This was being done inconsistently in arch_new_thread(); just
move it to the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andrew Boie
c0df99cc77 kernel: reduce scope of z_new_thread_init()
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().

Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Anas Nashif
b90fafd6a0 kernel: remove unused offload workqueue option
Those are used only in tests, so remove them from kernel Kconfig and set
them in the tests that use them directly.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-04-12 18:42:27 -04:00
Kumar Gala
55d4cd2aa8 arch: x86: Convert to new DT_INST macros
Convert older DT_INST_ macro use to the new include/devicetree.h
DT_INST macro APIs.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-03-26 03:29:23 -05:00
Andrew Boie
80a0d9d16b kernel: interrupt/idle stacks/threads as array
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.

There is now a centralized declaration in kernel_internal.h.

On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.

The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.

The extern definition of the main thread stack is now removed,
this doesn't need to be in a header.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 23:17:36 +02:00
Timo Teräs
6fd168e9a1 driver: uart: ns16550: convert to DT_INST_*
Change the code to use the automatically generated DT_INST_*
defines and remove the now-unneeded configs and fixups.

Signed-off-by: Timo Teräs <timo.teras@iki.fi>
2020-03-14 02:22:05 +02:00
Andrew Boie
9062a5ee91 revert: "program local APIC LDR register for..."
This reverts commit 87b65c5ac2.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-19 14:40:19 -08:00
Zide Chen
87b65c5ac2 interrupt_controller: program local APIC LDR register for xAPIC
If IO APIC is in logical destination mode, local APICs compare their
logical APIC ID defined in LDR (Logical Destination Register) with
the destination code sent with the interrupt to determine whether or not
to accept the incoming interrupt.

This patch programs LDR in xAPIC mode to support IO APIC logical mode.

The local APIC ID from the local APIC ID register can't be used as the
'logical APIC ID', because LAPIC IDs may not be consecutive numbers,
which makes it impossible for LDR to encode 8 IDs within 8 bits.

This patch chooses 0 for the BSP, and for APs, cpu_number, which is the
index into x86_cpuboot[] and is ultimately assigned in z_smp_init().

Signed-off-by: Zide Chen <zide.chen@intel.com>
2020-02-19 10:25:10 -08:00
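A hedged sketch of the LDR programming described above. The LOAPIC_LDR offset (0x0d0) and the placement of the logical APIC ID in bits 31:24 are architectural; the x86_write_loapic() helper is assumed here for illustration:

    #include <stdint.h>

    #define LOAPIC_LDR 0x0d0 /* Logical Destination Register offset */

    /* Program this CPU's logical APIC ID (bits 31:24 of LDR) in xAPIC mode */
    static void ldr_set(uint8_t logical_id)
    {
        x86_write_loapic(LOAPIC_LDR, (uint32_t)logical_id << 24);
    }
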
Andrew Boie
768a30c14f x86: organize 64-bit ESF
The callee-saved registers have been separated out and will not
be saved/restored if exception debugging is shut off.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-08 08:51:43 -05:00
Andy Ross
0e32f4dab0 arch/x86_64: Save RFLAGS during arch_switch()
The context switch implementation forgot to save the current flag
state of the old thread, so on resume the flags would be restored to
whatever value they had at the last interrupt preemption or thread
initialization.  In practice this guaranteed that the interrupt enable
bit would always be wrong, because obviously new threads and preempted
ones have interrupts enabled, while arch_switch() is always called
with them masked.  This opened up a race between exit from
arch_switch() and the final exit path in z_swap().

The other state bits weren't relevant -- the oddball ones aren't used
by Zephyr, and as arch_switch() on this architecture is a function
call the compiler would have spilled the (caller-save) comparison
result flags anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Anas Nashif
73008b427c tracing: move headers under include/tracing
Move tracing.h to include/tracing/ to align with subsystem reorg.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-02-07 15:58:05 -05:00
Wentong Wu
aa5d45d7cc tracing: add TRACING_ISR Kconfig
Add a TRACING_ISR Kconfig option to help high-latency backends work well.

Currently the ISR tracing hook functions are placed at the beginning
and end of the ISR wrapper. When an ISR is needed in the tracing path
(especially in the tracing backend), the tracing buffer is easily
exhausted if the async tracing method is enabled. Tracing all ISRs also
increases system latency. So add TRACING_ISR to enable/disable ISR
tracing here. A filter-out mechanism based on IRQ number will be added
later.

Signed-off-by: Wentong Wu <wentong.wu@intel.com>
2020-02-05 23:54:26 -05:00
Andy Ross
5b85d6da6a arch/x86_64: Poison instruction pointer of running threads
There was a bug where double-dispatch of a single thread on multiple
SMP CPUs was possible.  This can be mind-bending to diagnose, so when
CONFIG_ASSERT is enabled add an extra instruction to __resume (the
shared code path for both interrupt return and context switch) that
poisons the shared RIP of the now-running thread with a recognizable
invalid value.

Now attempts to run the thread again will crash instantly with a
discoverable cookie in their instruction pointer, and this will remain
true until it gets a new RIP at the next interrupt or switch.

This is under CONFIG_ASSERT because it meets the same design goals of
"a cheap test for impossible situations", not because it's part of the
assertion framework.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Ulf Magnusson
cf89ba33ea global: Fix up leading/trailing blank lines in files
To make the updated test in
https://github.com/zephyrproject-rtos/ci-tools/pull/121 clean, though it
only checks modified files.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-27 17:41:55 -06:00
Andy Ross
86430d8d46 kernel: arch: Clarify output switch handle requirements in arch_switch
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme.  As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.

Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
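A simplified sketch of the contract being clarified, with hypothetical helpers standing in for the real per-arch assembly:

    /* Hypothetical helpers, not Zephyr APIs */
    void *context_save_current(void);
    void context_restore(void *switch_to);

    /* The write through switched_from is now a mandatory synchronization
     * signal, not an optional convention.
     */
    void arch_switch(void *switch_to, void **switched_from)
    {
        *switched_from = context_save_current(); /* publish outgoing handle */
        context_restore(switch_to);              /* resume incoming thread */
    }
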
Andrew Boie
e34f1cee06 x86: implement kernel page table isolation
Implement a set of per-cpu trampoline stacks which all
interrupts and exceptions will initially land on, and also
as an intermediate stack for privilege changes as we need
some stack space to swap page tables.

Set up the special trampoline page which contains all the
trampoline stacks, TSS, and GDT. This page needs to be
present in the user page tables or interrupts don't work.

CPU exceptions, with KPTI turned on, are treated as interrupts
and not traps so that we have IRQs locked on exception entry.

Add some additional macros for defining IDT entries.

Add special handling of locore text/rodata sections when
creating user mode page tables on x86-64.

Restore qemu_x86_64 to use KPTI, and remove restrictions on
enabling user mode on x86-64.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-17 16:17:39 -05:00
Ulf Magnusson
4e85006ba4 dts: Rename generated_dts_board*.{h,conf} to devicetree*.{h,conf}
generated_dts_board.h is pretty redundant and confusing as a name. Call
it devicetree.h instead.

dts.h would be another option, but DTS stands for "devicetree source"
and is the source code format, so it's a bit confusing too.

The replacement was done by grepping for 'generated_dts_board' and
'GENERATED_DTS_BOARD'.

Two build diagram and input-output SVG files were updated as well, along
with misc. documentation.

hal_ti, mcuboot, and ci-tools updates are included too, in the west.yml
update.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2020-01-17 17:57:59 +01:00
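In practice, the rename means source files now include the new header name, e.g.:

    #include <devicetree.h>   /* previously: #include <generated_dts_board.h> */
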
Andrew Boie
2690c9e550 x86: move some per-cpu initialization to C
No reason we need to stay in assembly domain once we have
GS and a stack set up.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
a594ca7c8f kernel: cleanup and formally define CPU start fn
The "key" parameter is legacy, remove it.

Add a typedef for the expected function pointer type.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
4fcf28ef25 x86: mitigate swapgs Spectre V1 attacks
See CVE-2019-1125. We mitigate this by adding an 'lfence'
upon interrupt/exception entry after the decision has been
made whether it's necessary to invoke 'swapgs' or not.

Only applies to x86_64, 32-bit doesn't use swapgs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
3d80208025 x86: implement user mode on 64-bit
- In early boot, enable the syscall instruction and set up
  necessary MSRs
- Add a hook to update page tables on context switch
- Properly initialize thread based on whether it will
  start in user or supervisor mode
- Add landing function for system calls to execute the
  desired handler
- Implement arch_user_string_nlen()
- Implement logic for dropping a thread down to user mode
- Reserve per-CPU storage space for user and privilege
  elevation stack pointers, necessary for handling syscalls
  when no free registers are available
- Proper handling of gs register considerations when
  transitioning privilege levels

Kernel page table isolation (KPTI) is not yet implemented.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
07c278382a x86: remove retpoline code
This code:

1) Doesn't work
2) Hasn't ever been enabled by default
3) We mitigate Spectre V2 via Extended IBRS anyway

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
077b587447 x86: implement hw-based oops for both variants
We use a fixed value of 32, as the way interrupts/exceptions
are set up in x86_64's locore.S does not lend itself to
Kconfig configuration of the vector to use.

HW-based kernel oops is now permanently on; there's no reason
I can see to make it optional.

Default vectors for IPI and irq offload adjusted to not
collide.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
708d5f7922 x86: don't use privilege stack areas as a guard
This is causing problems, as if we create a thread in
a system call we will *not* be using the kernel page
tables if CONFIG_KPTI=n.

Just don't fiddle with this page's permissions; we don't
need it as a guard area anyway since we have a stack
guard placed immediately before it, and this page
is unused if user mode isn't active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
06c4207602 x86: add CONFIG_X86_USERSPACE for Intel64
Hidden config to select dependencies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
edc14e50ad x86: up-level speculative attack mitigations
These are now part of the common Kconfig and we
build spec_ctrl.c for all.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
c71e66e2a5 x86: add system call functions for 64-bit
Nothing too fancy here, we try as much as possible to
use the same register layout as the C calling convention.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
7f82b99ad4 x86: up-level some user mode functions
These are now common code, all are related to user mode
threads. The rat's nest of ifdefs in ia32's arch_new_thread
has been greatly simplified, there is now just one hook
if user mode is turned on.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
7ea958e0dd x86: optimize locations of psp and thread ptables
z_x86_thread_page_tables_get() now works for both user
and supervisor threads, returning the kernel page tables
in the latter case. This API has been up-leveled to
a common header.

The per-thread privilege elevation stack initial stack
pointer, and the per-thread page table locations are no
longer computed from other values, and instead are stored
in thread->arch.

A problem where the wrong page tables were dumped out
on certain kinds of page faults has been fixed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
e45c6eeebc x86: expose APIs for dumping MMU entry flags
Add two new non-static APIs for dumping out the
page table entries for a specified memory address,
and move to the main MMU code. Has debugging uses
when trying to figure out why memory domains are not
set up correctly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
7c293831c6 x86: add support for 64-bit thread ptables
Slightly different layout since the top-level PML4
is page-sized and must be page-aligned.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
fc589d7279 x86: implement 64-bit exception recovery logic
The esf has a different set of members on 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
ded0185eb8 x86: add GDT descriptors for user mode
These are arranged in the particular order required
by the syscall/sysret instructions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
692fda47fc x86: use MSRs for %gs
We don't need to set up GDT data descriptors for setting
%gs. Instead, we use the x86 MSRs to set GS_BASE and
KERNEL_GS_BASE.

We don't currently allow user mode to set %gs on its own,
but later on if we do, we have everything set up to issue
'swapgs' instructions on syscall or IRQ.

Unused entries in the GDT have been removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
10d033ebf0 x86: enable recoverable exceptions on 64-bit
These were previously assumed to always be fatal.
We can't have the faulting thread's XMM registers
clobbered, so put the SIMD/FPU state onto the stack
as well. This is fairly large (512 bytes) and the
execption stack is already uncomfortably small, so
increase to 2K.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie
1b45de8010 x86: fix stack traces on x86_64
Runtime stack traces (at least as currently implemented)
don't work on x86_64 normally as RBP is treated as a general-
purpose register. Depend on CONFIG_NO_OPTIMIZATIONS to enable
this on 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-18 08:15:54 -05:00
Andrew Boie
7e014f6b30 x86: up-level QEMU arch_system_halt()
qemu_x86_64 will exit the emulator on a fatal system error,
like qemu_x86 already does.

Improves CI times when tests fail since sanitycheck will not
need to wait for the timeout to expire.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-17 18:56:54 -08:00
Andrew Boie
2b67ca8ac9 x86: improve exception debugging
We now dump more information for less common cases,
and this is now centralized code for 32-bit/64-bit.
All of this code is now correctly wrapped around
CONFIG_EXCEPTION_DEBUG. Some cruft and unused defines
removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-17 11:39:22 -08:00
Andrew Boie
a824821b86 kernel: fix k_mem_partition data types
We need a size_t and not a u32_t for partition sizes,
for 64-bit compatibility.

Additionally, app_memdomain.h was also casting the base
address to a u32_t instead of a uintptr_t.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-12 14:48:42 -08:00
Kumar Gala
24ae1b1aa7 include: Fix use of <misc/FOO.h> -> <sys/FOO.h>
Fix #include <misc/FOO.h> as misc/FOO.h has been deprecated and
should be #include <sys/FOO.h>.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-12-10 08:39:37 -05:00
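For example (the specific headers shown are just common instances of the deprecated path move):

    #include <sys/printk.h>   /* was: #include <misc/printk.h> */
    #include <sys/util.h>     /* was: #include <misc/util.h> */
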
Flavio Ceolin
4afa181831 x86: fatal: Explicitly include syscall header
This file uses definitions from ia32/syscall.h, so properly
include that header.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-12-09 12:47:15 -05:00
Andrew Boie
9849c7d3e7 x86: don't use privilege stack areas as a guard
This is causing problems, as if we create a thread in
a system call we will *not* be using the kernel page
tables if CONFIG_KPTI=n, resulting in a crash when
the later call to copy_page_tables() tries to initialize
the PDPT (which is in the same page as the privilege
stack).

Just don't fiddle with this page's permissions; we don't
need it as a guard area anyway since we have a stack
guard placed immediately before it, and this page
is unused if user mode isn't active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-09 12:40:31 -05:00
Stephanos Ioannidis
9695763f5f arch: x86: Inline direct ISR functions.
This commit inlines the direct ISR functions that were previously
implemented in irq_manage.c, since the PR #20119 resolved the circular
dependency between arch.h and kernel_structs.h described in the issue
#3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-08 15:50:23 +01:00
Andrew Boie
4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Andrew Boie
0b27cacabd x86: up-level page fault handling
Now in common code for 32-bit and 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 11:19:38 -05:00
Andrew Boie
7d1ae023f8 x86: enable stack overflow detection on 64-bit
CONFIG_HW_STACK_PROTECTION is now available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Andrew Boie
9fb89ccf32 x86: consolidate some fatal error code
Some code for unwinding stacks and z_x86_fatal_error()
now in a common C file, suitable for both modes.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Stephanos Ioannidis
7bcfdadf81 arch: Simplify private header include path configuration.
When compiling the components under the arch directory, the compiler
include paths for arch and kernel private headers need to be specified.

This was previously done by adding 'zephyr_library_include_directories'
to CMakeLists.txt file for every component under the arch directory,
and this resulted in a significant amount of duplicate code.

This commit uses the CMake 'include_directories' command in the root
CMakeLists.txt to simplify specification of the private header include
paths for all the arch components.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Stephanos Ioannidis
2d7460482d headers: Refactor kernel and arch headers.
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.

The refactoring strategy used in this commit is detailed in the issue

This commit introduces the following major changes:

1. Establish a clear boundary between private and public headers by
  removing "kernel/include" and "arch/*/include" from the global
  include paths. Ideally, only kernel/ and arch/*/ source files should
  reference the headers in these directories. If these headers must be
  used by a component, these include paths shall be manually added to
  the CMakeLists.txt file of the component. This is intended to
  discourage applications from including private kernel and arch
headers either knowingly or unknowingly.

  - kernel/include/ (PRIVATE)
    This directory contains the private headers that provide private
   kernel definitions which should not be visible outside the kernel
   and arch source code. All public kernel definitions must be added
   to an appropriate header located under include/.

  - arch/*/include/ (PRIVATE)
    This directory contains the private headers that provide private
   architecture-specific definitions which should not be visible
   outside the arch and kernel source code. All public architecture-
   specific definitions must be added to an appropriate header located
   under include/arch/*/.

  - include/ AND include/sys/ (PUBLIC)
    This directory contains the public headers that provide public
   kernel definitions which can be referenced by both kernel and
   application code.

  - include/arch/*/ (PUBLIC)
    This directory contains the public headers that provide public
   architecture-specific definitions which can be referenced by both
   kernel and application code.

2. Split arch_interface.h into "kernel-to-arch interface" and "public
  arch interface" divisions.

  - kernel/include/kernel_arch_interface.h
    * provides private "kernel-to-arch interface" definition.
    * includes arch/*/include/kernel_arch_func.h to ensure that the
     interface function implementations are always available.
    * includes sys/arch_interface.h so that public arch interface
     definitions are automatically included when including this file.

  - arch/*/include/kernel_arch_func.h
    * provides architecture-specific "kernel-to-arch interface"
     implementation.
    * only the functions that will be used in kernel and arch source
     files are defined here.

  - include/sys/arch_interface.h
    * provides "public arch interface" definition.
    * includes include/arch/arch_inlines.h to ensure that the
     architecture-specific public inline interface function
     implementations are always available.

  - include/arch/arch_inlines.h
    * includes architecture-specific arch_inlines.h in
     include/arch/*/arch_inline.h.

  - include/arch/*/arch_inline.h
    * provides architecture-specific "public arch interface" inline
     function implementation.
    * supersedes include/sys/arch_inline.h.

3. Refactor kernel and the existing architecture implementations.

  - Remove circular dependency of kernel and arch headers. The
   following general rules should be observed:

    * Never include any private headers from public headers
    * Never include kernel_internal.h in kernel_arch_data.h
    * Always include kernel_arch_data.h from kernel_arch_func.h
    * Never include kernel.h from kernel_struct.h either directly or
     indirectly. Only add the kernel structures that must be referenced
     from public arch headers in this file.

  - Relocate syscall_handler.h to include/ so it can be used in the
   public code. This is necessary because many user-mode public codes
   reference the functions defined in this header.

  - Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
   necessary to provide architecture-specific thread definition for
   'struct k_thread' in kernel.h.

  - Remove any private header dependencies from public headers using
   the following methods:

    * If dependency is not required, simply omit
    * If dependency is required,
      - Relocate a portion of the required dependencies from the
       private header to an appropriate public header OR
      - Relocate the required private header to make it public.

This commit supersedes #20047, addresses #19666, and fixes #3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Ulf Magnusson
bd6e04411e kconfig: Clean up header comments and make them consistent
Use this short header style in all Kconfig files:

    # <description>

    # <copyright>
    # <license>

    ...

Also change all <description>s from

    # Kconfig[.extension] - Foo-related options

to just

    # Foo-related options

It's clear enough that it's about Kconfig.

The <description> cleanup was done with this command, along with some
manual cleanup (big letter at the start, etc.)

    git ls-files '*Kconfig*' | \
        xargs sed -i -E '1 s/#\s*Kconfig[\w.-]*\s*-\s*/# /'

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2019-11-04 17:31:27 -05:00
Daniel Leung
b7eb04b300 x86: consolidate x86_64 architecture, SoC and boards
There are two set of code supporting x86_64: x86_64 using x32 ABI,
and x86 long mode, and this consolidates both into one x86_64
architecture and SoC supporting truly 64-bit mode.

() Removes the x86_64:x32 architecture and SoC, and replaces
   them with the existing x86 long mode arch and SoC.
() Replaces qemu_x86_64 with qemu_x86_long, renamed as qemu_x86_64.
() Updates samples and tests to remove references to
   qemu_x86_long.
() Renames CONFIG_X86_LONGMODE to CONFIG_X86_64.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-10-25 17:57:55 -04:00
Andrew Boie
e1f2292895 x86: intel64: set page tables for every CPU
The page tables to use are now stored in the cpuboot struct.
For the first CPU, we set to the flat page tables, and then
update later in z_x86_prep_c() once the runtime tables have
been generated.

For other CPUs, by the time we get to z_arch_start_cpu()
the runtime tables are ready to go, and so we just install
them directly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
f6e82ea1bd x86: generate runtime 64-bit page tables
- Bring in CONFIG_X86_MMU and some related defines to
  common X86 Kconfig
- Don't set ARCH_HAS_USERSPACE for intel64 yet when
  X86_MMU is enabled
- Uplevel x86_mmu.c to common code
- Add logic for handling PML4 table and generating PDPTs
- move z_x86_paging_init() to common kernel_arch_func.h
- Uplevel inclusion of mmustructs.h to common x86 arch.h,
  both need it for memory domain defines

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
4c0d044863 x86: mmustructs: use Z_STRUCT_SECTION_ITERABLE()
This does the right thing for arches with 8-byte words.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
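A hedged sketch of what the macro provides (struct and instance names are illustrative): it places the object in an iterable linker section with alignment appropriate for the platform word size, which is what the 8-byte-word fix above relies on.

    #include <kernel.h>

    struct my_record {
        int id;
    };

    /* Instance placed in an iterable section, correctly aligned on both
     * 4-byte and 8-byte word architectures.
     */
    Z_STRUCT_SECTION_ITERABLE(my_record, first_record) = { .id = 1 };
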
Andrew Boie
46540f4bb1 intel64: don't set global flag in ptables
May cause issues when runtime page tables are installed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
cdd721db3b locore: organize data by type
Program text, rodata, and data need different MMU
permissions. Split out rodata and data from the program
text, updating the linker script appropriately.

Region size symbols added to the linker script, so these
can later be used with MMU_BOOT_REGION().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie
55cb961878 x86: arm: rename some functions
z_arch_ is only for those APIs in the arch interface.
Other stuff needs to be renamed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-21 10:13:38 -07:00
Andrew Boie
979b17f243 kernel: activate arch interface headers
Duplicate definitions elsewhere have been removed.

A couple functions which are defined by the arch interface
to be non-inline, but were implemented inline by native_posix
and intel64, have been moved to non-inline.

Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.

Some massaging of native_posix headers to get everything
in the right scope.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-21 10:13:38 -07:00
Andy Ross
8bc3b6f673 arch/x86/intel64: Fix assumption with dummy threads
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field.  This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.

Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).

Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-10-19 12:09:32 -07:00
Andrew Boie
9c76d28bee x86: intel64: fatal: minor formatting improvements
Line up everything nicely, add leading '0x' to hex
addresses, and remove redundant newlines. Add
whitespace between the register name and contents
so the contents can be easily selected from a terminal.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
Andrew Boie
fdd3ba896a x86: intel64: set the WP bit for paging
Otherwise, supervisor mode can write to read-only areas,
failing tests that check this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
Andrew Boie
31620b90e2 x86: refactor mmustructs.h
The struct definitions for pdpt, pd, and pt entries have been
removed:

 - Bitfield ordering in a struct is implementation dependent,
   it can be right-to-left or left-to-right
 - The two different structures for page directory entries were
   not being used consistently, or when the type of the PDE
   was unknown
 - Anonymous structs/unions are GCC extensions

Instead these are now u64_t, with bitwise operations used to
get/set fields.

A new set of inline functions for fetching various page table
structures has been implemented, replacing the older macros.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-14 11:49:39 -07:00
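A hedged sketch of the get/set-by-bitwise-ops approach described above; the bit positions and mask shown are illustrative, not the exact Zephyr defines:

    #include <stdbool.h>
    #include <zephyr/types.h>

    #define MMU_ENTRY_PRESENT   ((u64_t)1 << 0)
    #define MMU_ENTRY_WRITE     ((u64_t)1 << 1)
    #define MMU_ENTRY_ADDR_MASK 0x00000000fffff000ULL /* example mask */

    static inline bool entry_is_present(u64_t entry)
    {
        return (entry & MMU_ENTRY_PRESENT) != 0U;
    }

    static inline u64_t entry_phys_addr(u64_t entry)
    {
        return entry & MMU_ENTRY_ADDR_MASK;
    }
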
Andrew Boie
ab4d647e6d x86: mmu: get rid of x86_page_entry_data_t typedef
This hasn't been necessary since we dropped support for 32-bit
non-PAE page tables. Replace it with u64_t and scrub any
unnecessary casts left behind.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-14 11:49:39 -07:00
Andrew Boie
e3ab43580c x86: move mmustructs.h
This will be used for both 32-bit and 64-bit mode.
This header gets pulled in by x86's arch/cpu.h, so put
it in include/arch/x86/.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-14 11:49:39 -07:00
Andrew Boie
8ffff144ea kernel: add architecture interface headers
include/sys/arch_inlines.h will contain all architecture APIs
that are used by public inline functions and macros,
with implementations deriving from include/arch/cpu.h.

kernel/include/arch_interface.h will contain everything
else, with implementations deriving from
arch/*/include/kernel_arch_func.h.

Instances of duplicate documentation for these APIs have been
removed; implementation details have been left in place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-11 13:30:46 -07:00
Andrew Boie
e340f8d22e x86: intel64: enable no-execute
Set the NXE bit in the EFER MSR so that the NX bit can
be set in page tables. Otherwise, the NX bit is treated
as reserved and leads to a fault if set.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-10 13:41:02 -07:00
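A sketch of the step described: the EFER MSR number (0xC0000080) and the NXE bit (bit 11) are architectural, while the read-modify-write helper shown is only illustrative:

    #include <stdint.h>

    #define IA32_EFER_MSR 0xC0000080U
    #define EFER_NXE      (1U << 11)

    static inline void efer_enable_nxe(void)
    {
        uint32_t lo, hi;

        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(IA32_EFER_MSR));
        lo |= EFER_NXE; /* allow NX to be set in page table entries */
        __asm__ volatile("wrmsr" :: "a"(lo), "d"(hi), "c"(IA32_EFER_MSR));
    }
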
Andrew Boie
7f715bd113 intel64: add inline definition of z_arch_switch()
This just calls the assembly code. Needed to be
inline per the architecture API.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Charles E. Youse
f7cfb4303b arch/x86: do not assume MP means SMP
It's possible to have multiple processors configured without using the
SMP scheduler, so don't make definitions dependent on CONFIG_SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
e6a31a9e89 arch/x86: (Intel64) initialize TSS interrupt stack from cpuboot[]
In non-SMP MP situations, the interrupt stacks might not exist, so
do not assume they do. Instead, initialize the TSS IST1 from the
cpuboot[] vector (meaning, on APs, the stack from z_arch_start_cpu).
Eliminates redundancy at the same time.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
643661cb44 arch/x86: declare z_x86_prep_c() in kernel_arch_func.h
And remove the ad hoc prototype in cpu.c for Intel64.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
5abab591c2 arch/x86: (Intel64) make z_arch_start_cpu() synchronous
Don't leave z_arch_start_cpu() until the target CPU has been started.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
3b145c0d4b arch/x86: (Intel64) do not lock interrupts around irq_offload()
This is the Wrong Thing(tm) with SMP enabled. Previously this
worked because interrupts would be re-enabled in the interrupt
entry sequence, but this is no longer the case.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
66510db98c arch/x86: (Intel64) add scheduler IPI support
Add z_arch_sched_ipi() and such to enable scheduler IPIs when SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
74e3717af6 arch/x86: (Intel64) fix conditional assembly in locore.S
The conditional assembly was ignoring the rest of the expression,
though the effect was harmless (including unreachable code in some builds).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
3eb1a8b59a arch/x86: (Intel64) implement SMP support
Add duplicate per-CPU data structures (x86_cpuboot, tss, stacks, etc.)
for up to 4 total CPUs, add code in locore and z_arch_start_cpu().

The test board, qemu_x86_long, now defaults to 2 CPUs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
2808908816 arch/x86: alter signature of z_x86_prep_c() function
Take a dummy first argument, so that the BSP entry point (z_x86_prep_c)
has the same signature as the AP entry point (smp_init_top).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
f9eaee35b8 arch/x86: (Intel64) use per-CPU parameter struct for CPU startup
A new 'struct x86_cpuboot' is created as well as an instance called
'x86_cpuboot[]' which contains per-CPU boot data (initial stack,
entry function/arg, selectors, etc.). The locore now consults this
table to set up per-CPU registers, etc. during early boot.

Also, rename tss.c to cpu.c as its scope is growing.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
edf5761c83 arch/x86: (Intel64) rename kernel segment constants
There's no need to qualify the 64-bit CS/DS selectors, and the GS and
TR selectors are renamed CPU0_GS and CPU0_TR as they are CPU-specific.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
90bf0da332 arch/x86: (Intel64) optimize and re-order startup assembly sequence
In some places the code was being overly pedantic; e.g., there is no
need to load our own 32-bit descriptors because the loader's are fine
for our purposes. We can defer loading our own segments until 64-bit.

The sequence is re-ordered to facilitate code sharing between the BSP
and APs when SMP is enabled (all BSP-specific operations occur before
the per-CPU initialization).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
17e135bc41 arch/x86: (Intel64) clear BSS before entering long mode
This is really just to facilitate sharing CPU bootstrap code between
the BSP and the APs, moving the clear operation out of the way.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
a981f51fe6 arch/x86: drivers/loapic_intr.c: move local APIC initialization
In the general case, the local APIC can't be treated as a normal device
with a single boot-time initialization - on SMP systems, each CPU must
initialize its own. Hence the initialization proper is separated from
the device-driver initialization, and said initialization is called
from the early startup-assembly code when appropriate.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
418e5c1b38 arch/x86: factor out common assembly startup code
The 32-bit and 64-bit assembly startup sequences share quite a
bunch of common code, so it's factored out into one file to avoid
repeating ourselves (and potentially falling out of sync).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
25a7cc1136 arch/x86: (Intel64) add missing linker symbols
The linker script was missing symbols that defined the boundaries
of kernel memory segments (_image_rom_end, etc.). These are added
so that core/memmap.c can properly account for those segments.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
8279af76c8 arch/x86: elevate prep_c.c to common code
Elevate the previously 32-bit-only z_x86_prep_c() function to common
code, so both 32-bit and 64-bit arches now enter the kernel this way.
Minor changes to prep_c.c to make it build with the SMP scheduler on.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse
cc9be2e982 arch/x86: (Intel64) start up on _interrupt_stack, not _exception_stack
Simply for consistency with other platforms.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Andrew Boie
f0ddbd7eee x86: abstract toplevel page table pointer
This patch is a preparatory step in enabling the MMU in
long mode; no steps are taken to implement long mode support.

We introduce struct x86_page_tables, which represents the
top-level data structure for page tables:

- For 32-bit, this will contain a four-entry page directory
  pointer table (PDPT)
- For 64-bit, this will (eventually) contain a page map level 4
  table (PML4)

In either case, this pointer value is what gets programmed into
CR3 to activate a set of page tables. There are extra bits in
CR3 to set for long mode, we'll get around to that later.

This abstraction will allow us to use the same APIs that work
with page tables in either mode, rather than hard-coding that
the top level data structure is a PDPT.

z_x86_mmu_validate() has been re-written to make it easier to
add another level of paging for long mode, to support 2MB
PDPT entries, and correctly validate regions which span PDPTE
entries.

Some MMU-related APIs moved out of 32-bit x86's arch.h into
mmustructs.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-04 15:53:49 -07:00
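A hedged sketch of the abstraction introduced above; the layout shown is illustrative, not the exact definition:

    #include <zephyr/types.h>

    struct x86_page_tables {
    #ifdef CONFIG_X86_LONGMODE
        u64_t pml4[512];   /* 64-bit: top level is a page-sized PML4 */
    #else
        u64_t pdpt[4];     /* 32-bit PAE: four-entry PDPT */
    #endif
    };

    /* In either mode, the address of this structure is what gets loaded
     * into CR3 to activate the set of page tables.
     */
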
Andrew Boie
13433fcc4b x86: fix EXCEPTION_DEBUG dependency
This requires logging, not printk.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 16:15:55 -05:00
Andrew Boie
89d4c6928e kernel: add arch abstraction for irq_offload()
This makes it clearer that this is an API that is expected
to be implemented at the architecture level.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 11:11:42 +02:00
Andrew Boie
2c1fb971e0 kernel: rename __swap
This is part of the core kernel -> architecture API and
has been renamed to z_arch_swap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
fa22ecffec x86: remove redundant idle timestamp setting
This metric shows when the system first enters an idle
state, which has already been recorded in the arch-
independent implementation of the idle thread.

Only x86 was doing this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
f6fb634b89 kernel: rename kernel_arch_init()
This is part of the core kernel -> architecture interface and
has been renamed z_arch_kernel_init().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
07525a3d54 kernel: add arch interface for idle functions
k_cpu_idle() and k_cpu_atomic_idle() were being directly
implemented by arch code.

Rename these implementations to z_arch_cpu_idle() and
z_arch_cpu_atomic_idle(), and call them from new inline
function definitions in kernel.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
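A simplified sketch of the new layering described above, as it would appear in kernel.h:

    /* Public API, now thin inline wrappers over the arch implementations */
    static inline void k_cpu_idle(void)
    {
        z_arch_cpu_idle();
    }

    static inline void k_cpu_atomic_idle(unsigned int key)
    {
        z_arch_cpu_atomic_idle(key);
    }
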
Andrew Boie
e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
61901ccb4c kernel: rename z_new_thread()
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie
9e1dda8804 timing_info: rename globals
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Charles E. Youse
7637571871 arch/x86: rename CONFIG_X86_ACPI and related to CONFIG_ACPI
ACPI is predominantly x86, and only currently implemented on x86,
but it is employed on other architectures, so rename accordingly.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
200056df2f arch/x86: rename CONFIG_X86_MULTIBOOT and related to CONFIG_MULTIBOOT
Simple naming change, since MULTIBOOT is clear enough by itself and
"namespacing" it to X86 is unnecessary and/or inappropriate.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
2cf52476ea arch/x86: add support for non-trivial memory maps
x86 has more complex memory maps than most Zephyr targets. A mechanism
is introduced here to manage such a map, and some methods are provided
to populate it (e.g., Multiboot).

The x86_info tool is extended to display memory map data.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
81eeff83b0 arch/x86: multiboot: migrate multiboot initialization to early C
Originally, the multiboot info struct was copied in the early assembly
language code. This code is moved to a C function in multiboot.c for
two reasons:

1. It's about to get more complicated, as we want the ability to use
   a multiboot-provided memory map if available, and
2. this will facilitate its sharing between 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse
1ffab8a5f2 arch/x86: rudimentary ACPI support
Implement a simple ACPI parser with enough functionality to
enumerate CPU cores and determine their local APIC IDs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Mrinal Sen
1246cb8cef debug: tracing: Remove unneeded abstraction
Various C and assembly modules make function calls to z_sys_trace_*,
which merely call the corresponding sys_trace_* functions. This commit
simplifies things by making direct function calls to the sys_trace_*
functions from these modules. Subsequently, the z_sys_trace_*
functions are removed.

Signed-off-by: Mrinal Sen <msen@oticon.com>
2019-09-26 06:26:22 -04:00
Charles E. Youse
66a2ed2360 arch/x86: (Intel64) move RAX to volatile register set
This used to be part of the "restore always" set of registers because
__swap was expected to return a value.  No longer required, so RAX is
moved to the volatile registers and we save a few cycles occasionally.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
074ce889fb arch/x86: (Intel64) migrate from __swap to z_arch_switch()
The latter primitive is required for SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
32fc239aa2 arch/x86: (Intel64) use TSS for per-CPU variables
A space is allocated in the TSS for per-CPU variables. At present,
this is only a 'struct _cpu *' to find the _kernel CPU struct. The
locore routines are rewritten to find _current and _nested via this
pointer rather than referencing the _kernel global directly.

This is obviously in preparation for SMP support.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
1d8c80bc05 arch/x86: (Intel64) move STACK_SENTINEL check
This function call was erroneously inserted between the instruction
that set the Z flag and the instruction that tested the Z flag. The
call is moved up a few instructions where it can't junk CPU state.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
e4d5ab363c arch/x86: (Intel64) define TSS in C, not assembly
Declare the 64-bit TSS as a struct, and define the instance in C.
Add a data segment selector that overlaps the TSS and keep that
loaded in GS so we can access the TSS via a segment-override prefix.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
a8de9577c9 arch/x86: restructure ISR stacks (conceptually)
This is largely a conceptual change rather than an actual change.
Instead of using an array of interrupt stacks (one for each IRQ
nesting level), we use one interrupt stack and subdivide it. The
effect is the same, but this is more in line with the Zephyr model
of one ISR stack per CPU (as reflected in init.c).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
bd094ddac2 arch/x86: inline x2APIC EOI in 64-bit code
Like its 32-bit sibling, the 64-bit code should EOI inline rather than
invoking a function; the call defeats the performance advantages of x2APIC.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse
3036faf88a tests/benchmarks: fix BOOT_TIME_MEASUREMENT
The boot time measurement sample was giving bogus values on x86: an
assumption was made that the system timer is in sync with the CPU TSC,
which is not the case on most x86 boards.

Boot time measurements are no longer permitted unless the timer source
is the local APIC. To avoid issues of TSC scaling, the startup datum
has been forced to 0, which is in line with the ARM implementation
(which is the only other platform which supports this feature).

Cleanups along the way:

As the datum is now assumed zero, some variables are removed and
calculations simplified. The global variables involved in boot time
measurements are moved to the kernel.h header rather than being
redeclared in every place they are referenced. Since none of the
measurements actually use 64-bit precision, the samples are reduced
to 32-bit quantities.

In addition, this feature has been enabled in long mode.

Fixes: #19144

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-21 16:43:26 -07:00
Charles E. Youse
a95c94cfe2 arch/x86/ia32: move IA32 thread state to _thread_arch
There are not enough bits in k_thread.thread_state with SMP enabled,
and the field is (should be) private to the scheduler, anyway. So
move state bits to the _thread_arch where they belong.

While we're at it, refactor some offset data w/r/t _thread_arch
because it can be shared between 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-20 14:31:18 -04:00
Charles E. Youse
a224998355 arch/x86/intel64: do not use thread_state for arch data
k_thread.thread_state (or rather, _thread_base.thread_state) should be
private to the kernel/scheduler, so flags previously stored there are
moved to _thread_arch where they belong.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-20 14:31:18 -04:00
Jan Van Winkel
f3eec6cba3 cmake: toolchain abstraction for coverage
Added toolchain abstraction for coverage for both gcc and clang.

Signed-off-by: Jan Van Winkel <jan.van_winkel@dxplore.eu>
2019-09-17 11:25:29 +02:00
Charles E. Youse
dc0314af7f arch/x86: honor CONFIG_INIT_STACKS in 64-bit mode
Initialize the IRQ stacks with 0xAA bytes when the option is enabled.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
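A minimal sketch of the 0xAA fill described above; the function and parameter names are illustrative, not the real init path:

    #include <string.h>

    /* Paint an IRQ stack so unused depth can be measured later */
    static void init_irq_stack(char *stack, size_t size)
    {
    #ifdef CONFIG_INIT_STACKS
        (void)memset(stack, 0xAA, size);
    #else
        (void)stack;
        (void)size;
    #endif
    }
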
Charles E. Youse
d506489999 arch/x86: optimize nested IRQ entry/exit
We don't need to save the ABI caller-save registers here, because
we don't preempt threads from nested IRQ contexts.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
468cd4d98f arch/x86: add support for CONFIG_STACK_SENTINEL
Apparently I missed the arch-dependent bit of this. Fixed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
a5eea17dda arch/x86: add SSE floating-point to Intel64 subarch
This is a naive implementation which does "eager" context switching
for floating-point context, which, of course, introduces performance
concerns. Other approaches have security concerns, SMP implications,
and impact the x86 arch and Zephyr project as a whole. Discussion is
needed, so punting with the straightforward solution for now.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
2bb59fc84e arch/x86: add nested interrupt support to Intel64
Add support for multiple IRQ stacks and interrupt nesting.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
cdb9ac3895 arch/x86: Add exception reporting code for Intel64
Fleshed out z_arch_esf_t and added code to build this frame when
exceptions occur. Created a separate small stack for exceptions and
shifted the initialization code to use this instead of the IRQ stack.

Moved IRQ stack(s) to irq.c.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
a10f2601cc arch/x86: add IRQ offloading to Intel64 subarch
The IRQ_OFFLOAD_VECTOR config option is also moved to the arch level,
as it is shared between both 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
0e0199387a arch/x86: set default stack sizes
Using the arch Kconfig here, instead of kernel/Kconfig. Intel64 with
the SysV ABI requires some pretty big stacks. These 4K-8K defaults
are arguably a bit small, but the Zephyr defaults are REALLY too small.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
4ddaa59a89 arch/x86: initial Intel64 support
First "complete" version of Intel64 support for x86. Compilation of
apps for supported boards (read: up_squared) with CONFIG_X86_LONGMODE=y
is now working. Booting, device drivers, interrupts, scheduling, etc.
appear to be functioning properly. Beware that this is ALPHA quality,
not ready for production use, but the port has advanced far enough that
it's time to start working through the test suite and samples, fleshing
out any missing features, and squashing bugs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
58bbbddbef arch/x86: fix multiboot.c pointer cast
Widen the integer to pointer size before conversion, to make
explicit the intent (and silence the compiler warning). Also
fix a minor bug involving a duplicate (and thus dead) store.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
34307a54f0 arch/x86: initial Intel64 bootstrap framework
This patch adds basic build infrastructure, definitions, a linker
script, etc. to use the Zephyr and 0.10.1 SDK to build a 64-bit
ELF binary suitable for use with GRUB to minimally bootstrap an
Apollo Lake (e.g., UpSquared) board. The resulting binary can hardly
be called a Zephyr kernel as it is lacking most of the glue logic,
but it is a starting point to flesh those out in the x86 tree.

The "kernel" builds with a few harmless warnings, both with GCC from
the Zephyr SDK and with ICC (which is currently being worked on in
a separate branch). These warnings are related either to pointer size
differences (since this is an LP64 build) or to dummy functions
that will be replaced with working versions shortly.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
c18f028366 arch/x86: refactor offsets.c
The IA32 and Intel64 subarchitectures will generate different offset
symbols, so they are refactored. No functional change.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
d25ef6ed44 arch/x86/pcie: use Z_IRQ_TO_INTERRUPT_VECTOR() macro
The _irq_to_interrupt_vector[] array shouldn't be accessed directly,
as there is a macro for this.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse
6cf904cc0d arch/x86: rename X64 references to Intel64
Intel's X86-64 implementation is officially "Intel64", so follow suit.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Andrew Boie
a470ba1999 kernel: remove z_fatal_print()
Use LOG_ERR instead.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-12 05:17:39 -04:00
Charles E. Youse
ee525c2597 arch/x86: inline x2APIC EOI
From the Jailhouse days, this has been a function call. That's silly.
We now inline the EOI in the ISR when in x2APIC mode. Also clean up
z_irq_controller_eoi(), so it now uses the inline macros.

Also, we now enable x2APIC on up_squared by default.

Fixes: #17133

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-12 09:53:45 +08:00
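A hedged sketch of the inlined EOI: in x2APIC mode, EOI is a single wrmsr of zero to the x2APIC EOI MSR (0x80B), which is what skipping the function call buys on the interrupt exit path. The helper name is illustrative:

    #define X2APIC_EOI_MSR 0x80BU

    static inline void x2apic_eoi(void)
    {
        /* Write 0 to the x2APIC EOI MSR to signal end of interrupt */
        __asm__ volatile("wrmsr" :: "a"(0), "d"(0), "c"(X2APIC_EOI_MSR));
    }
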
Charles E. Youse
6767563f94 arch/x86: remove support for IAMCU ABI
This ABI is no longer required by any targets and is deprecated.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-07 10:07:42 -04:00
Andrew Boie
ec5eb8f7d4 x86: add build assert that RAM bounds <= 4GB
DTS will fail this first, but there's no cost to adding
a second level of defense.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 12:50:53 -07:00
Andrew Boie
02629b69b5 x86: add prep_c function
Assembly language start code will enter here, which sets up
early kernel initialization and then calls z_cstart() when
finished.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 12:50:53 -07:00
Andrew Boie
c3b3aafaec x86: generate page tables at runtime
Removes very complex boot-time generation of page tables
with a much simpler runtime generation of them at bootup.

For those x86 boards that enable the MMU in the defconfig,
set the number of page pool pages appropriately.

The MMU_RUNTIME_* flags have been removed. They were an
artifact of the old page table generation and did not
correspond to any hardware state.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-07 12:50:53 -07:00
Andrew Boie
0add92523c x86: use a struct to specify stack layout
Makes the code that defines stacks, and code referencing
areas within the stack object, much clearer.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
8014e075f4 x86: use per-thread page tables
Previously, context switching on x86 with memory protection
enabled involved walking the page tables, de-configuring all
the partitions in the outgoing thread's memory domain, and
then configuring all the partitions in the incoming thread's
domain, on a global set of page tables.

We now have a much faster design. Each thread has reserved in
its stack object a number of pages to store page directories
and page tables pertaining to the system RAM area. Each
thread also has a toplevel PDPT which is configured to use
the per-thread tables for system RAM, and the global tables
for the rest of the address space.

The result of this is on context switch, at most we just have
to update the CR3 register to the incoming thread's PDPT.

The x86_mmu_api test was making too many assumptions and has
been adjusted to work with the new design.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
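
A rough sketch of the fast path described above, with illustrative names
rather than the actual kernel internals: once every thread carries its own
toplevel PDPT, switching address spaces reduces to a single CR3 load.

    #include <stdint.h>

    struct thread_page_tables {
        uintptr_t pdpt_phys;   /* physical address of this thread's PDPT */
    };

    static inline void switch_page_tables(const struct thread_page_tables *incoming)
    {
        /* at most, a context switch points CR3 at the incoming thread's PDPT */
        __asm__ volatile ("movl %0, %%cr3"
                          :
                          : "r" (incoming->pdpt_phys)
                          : "memory");
    }
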
Andrew Boie
8915e41b7b userspace: adjust arch memory domain interface
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.

This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.

This is needed for: #13441 #13074 #15135

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
0c3e05ae7c x86: report CR3 on fatal exception
Lets us know what set of page tables were in use when
the error occurred.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
ea201b206f x86: add debug functions for dumping page tables
These turned out to be quite useful when debugging MMU
issues, commit them to the tree. The output format is
virtually the same as gen_mmu_x86.py's verbose output.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
26dccaabcb x86: reserve room for per-thread page tables
Currently page tables have to be re-computed in
an expensive operation on context switch. Here we
reserve some room in the page tables such that
we can have per-thread page table data, which will
be much simpler to update on context switch at
the expense of memory.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
76310f6896 x86: make guard pages ro instead of non-present
Has the same effect of catching stack overflows, but
makes debugging with GDB simpler since we won't get
errors when inspecting such regions. Making these
areas non-present was more than we needed; read-only
is sufficient.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie
bd709c7322 x86: support very early printk() if desired
Adapted from similar code in the x86_64 port.
Useful when debugging boot problems on actual x86
hardware if a JTAG isn't handy or feasible.

Turn this on for qemu_x86.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-02 00:29:21 -07:00
Charles E. Youse
8e437adc42 thread.c: remove vestigial CONFIG_INIT_STACKS cruft
It looks like, at some point in the past, initializing thread stacks
was the responsibility of the arch layer. After that was centralized,
we forgot to remove the related conditional header inclusion. Fixed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-31 09:15:45 +03:00
Anas Nashif
cb412df725 x86: remove code for interrupt forwarding bug
This only applied to quark_se, so removing it.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-07-29 21:30:25 -07:00
Charles E. Youse
f7a0dce636 arch/x86: remove support for CONFIG_REALMODE
We no longer support any platforms that bootstrap from real mode.

Fixes: #17166

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-29 21:29:38 -07:00
Andrew Boie
96571a8c40 kernel: rename NANO_ESF
This is now called z_arch_esf_t, conforming to our naming
convention.

This needs to remain a typedef due to how our offset generation
header mechanism works.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
8a9e8e0cd7 kernel: support log system for fatal errors
We introduce a new z_fatal_print() API and replace all
occurrences of exception handling code to use it.
This routes messages to the logging subsystem if enabled.
Otherwise, messages are sent to printk().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
5623637a48 kernel: abolish _default_esf
NANO_ESF parameters may now be NULL, indicating that no
exception frame is available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Andrew Boie
71ce8ceb18 kernel: consolidate error handling code
* z_NanoFatalErrorHandler() is now moved to common kernel code
  and renamed z_fatal_error(). Arches dump arch-specific info
  before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
  and renamed k_sys_fatal_error_handler(). It is now much simpler;
  the default policy is simply to lock interrupts and halt the system.
  If an implementation of this function returns, then the currently
  running thread is aborted.
* New arch-specific APIs introduced:
  - z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
  namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
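
A hedged sketch of the default policy the commit describes, using the names
from the message above rather than the verbatim kernel code: flush the log,
lock interrupts, and halt; if a custom override returns instead,
z_fatal_error() aborts the offending thread.

    void k_sys_fatal_error_handler(unsigned int reason, const NANO_ESF *esf)
    {
        ARG_UNUSED(esf);

        LOG_PANIC();              /* flush any buffered log messages */
        (void)z_arch_irq_lock();  /* assumed irq-lock helper name */
        z_arch_system_halt(reason);
        /* unreachable with the default policy; an overriding implementation
         * that returns causes the currently running thread to be aborted
         */
    }
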
Andrew Boie
caa47e6c97 x86: allow user mode to induce kernel oops
Before, attempting to induce a kernel oops would instead
lead to a general protection fault as the interrupt vector
was at DPL=0.

Now we allow by setting DPL=3. We restrict the allowable
reason codes to either stack overflows or kernel oops; we
don't want user mode to be able to create a kernel panic,
or fake some other kind of exception.

Fixes an issue where the stack canary test case was triggering
a GPF instead of a stack check exception on x86.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-16 18:09:49 -07:00
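
A sketch (illustrative only, with assumed helper and field names) of the
reason-code filtering described above: since the oops vector is now
reachable at DPL=3, anything user mode passes in that isn't an oops or a
stack-check failure is clamped down to a plain kernel oops.

    static void handle_user_oops(unsigned int reason, const z_arch_esf_t *esf)
    {
        if (arch_is_user_context()) {          /* assumed helper name */
            if (reason != K_ERR_KERNEL_OOPS &&
                reason != K_ERR_STACK_CHK_FAIL) {
                /* don't let user mode fake a panic or another exception */
                reason = K_ERR_KERNEL_OOPS;
            }
        }
        z_fatal_error(reason, esf);
    }
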
Ioannis Glaropoulos
85f7aeeced arch: x86: make z_arch_float_disable return -ENOSYS if not supported
For the x86 architecture the z_arch_float_disable() is only
implemented when building with CONFIG_LAZY_FP_SHARING, so we
make z_arch_float_disable() return -ENOSYS when we build with
FLOAT and FP_SHARING but on an x86 platform where
LAZY_FP_SHARING is not supported.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-10 13:44:02 -07:00
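
In outline (a sketch, not the exact diff), the behaviour described above
looks like this:

    int z_arch_float_disable(struct k_thread *thread)
    {
    #if defined(CONFIG_LAZY_FP_SHARING)
        /* ... actually release FPU ownership for this thread ... */
        return 0;
    #else
        ARG_UNUSED(thread);
        return -ENOSYS;   /* FP disabling not supported in this configuration */
    #endif
    }
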
Charles E. Youse
0fb9d3450b arch/x86: move exception.h to ia32/exception.h
This file is currently 32-bit specific. Move it and references to it.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
3ff2746857 arch/x86: eliminate cache_private.h
This file merely declares external functions referenced only
by ia32/cache.c, so the declarations are inlined instead.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
589b86f534 arch/x86: remove swapstk.h and references to it
This file was used to generate offsets for host tools that are no
longer in use, so it's removed and the offsets are no longer generated.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
b4316fef48 arch/x86: eliminate arch/x86/include/asm_inline.h
Over time, this has been reduced to a few functions dealing solely
with floating-point support, referenced only from core/ia32/float.c.
Thus they are moved into that file and the header is eliminated.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
7c2d7d7b69 arch/x86: move arch/x86/include/mmustructs.h to ia32/mmustructs.h
For now, only the 32-bit subarchitecture supports memory protection.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-03 20:01:17 -04:00
Charles E. Youse
0325a3d972 arch/x86: eliminate include/arch/x86/irq_controller.h
The MVIC is no longer supported, and only the APIC-based interrupt
subsystem remains. Thus this layer of indirection is unnecessary.

This also corrects an oversight left over from the Jailhouse x2APIC
implementation affecting EOI delivery for direct ISRs only.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
930e6af999 arch/x86: move include/arch/x86/segmentation.h to ia32/segmentation.h
This header is currently IA32-specific, so move it into the subarch
directory and update references to it.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
dff016b53c arch/x86: move include/arch/x86/arch.h to ia32/arch.h
Making room for the Intel64 subarch in this tree. This header is
32-bit specific and so it's relocated, and references rewritten
to find it in its new location.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
6f3009ecf0 arch/x86: move include/arch/x86/asm.h to include/arch/x86/ia32/asm.h
This file is 32-bit specific, so it is moved into the ia32/ directory
and references to it are updated accordingly.

Also, SP_ARG* definitions are no longer used, so they are removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Charles E. Youse
c7bc7a8c86 arch/x86: clean up model-specific register definitions in msr.h
Eliminate definitions for MSRs that we don't use. Centralize the
definitions for the MSRs that we do use, including their fields.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-07-02 19:30:00 -04:00
Andrew Boie
a3a89ed9d5 x86: only use lfence if x86 bcb config enabled
Work around a testcase problem, where we want to check some
logic for the bounds check bypass mitigation in the common
kernel code. By changing the ifdef to the x86-specific option
for these lfence instructions, we avoid IAMCU build errors
but still test the common code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-06-30 09:22:09 -04:00
Anas Nashif
5b0aa794b2 cleanup: include/: move misc/reboot.h to power/reboot.h
move misc/reboot.h to power/reboot.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
a2fd7d70ec cleanup: include/: move misc/util.h to sys/util.h
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
9ab2a56751 cleanup: include/: move misc/printk.h to sys/printk.h
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
5eb90ec169 cleanup: include/: move misc/__assert.h to sys/__assert.h
move misc/__assert.h to sys/__assert.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif
10291a0789 cleanup: include/: move tracing.h to debug/tracing.h
move tracing.h to debug/tracing.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Charles E. Youse
a506aa3dfb arch/x86: remove CONFIG_X86_FIXED_IRQ_MAPPING support
This was only enabled by the MVIC, which in turn was only used
by the Quark D2000, which has been removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-25 08:06:43 -04:00
Charles E. Youse
3dc7c7a6ea drivers/interrupt_controller/mvic.c: remove MVIC interrupt controller
The Quark D2000 is the only x86 with an MVIC, and since support for
it has been dropped, the interrupt controller is orphaned. Removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-25 08:06:43 -04:00
Charles E. Youse
ef736f77c2 arch/x86: relocate and rename SYS_X86_RST_* constants
These constants do not need global exposure, as they're only
referenced in the reboot API implementation. Also their names
are trimmed to fit into the X86-arch-specific namespace.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-24 07:46:24 -07:00
Charles E. Youse
4bdbd879ef arch/x86: remove old PRINTK() debugging macro
This appears to date all the way back to the initial import
and is used in exactly one place if DEBUG is on. Removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-24 07:46:24 -07:00
Charles E. Youse
2835c22985 arch/x86: use fixed initial EFLAGS on thread creation
Previously the existing EFLAGS was used as a base which was
then manipulated accordingly. This is unnecessary as the bits
preserved contain no useful state related to the new thread.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-24 07:46:24 -07:00
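
For reference, a sketch of what a fixed initial value can look like (the
macro name and exact value are ours, not the commit's): bit 1 of EFLAGS
always reads as 1, and IF (bit 9) is set so the new thread starts with
interrupts enabled.

    /* illustrative fixed initial EFLAGS: reserved bit 1 | IF */
    #define EFLAGS_INITIAL  (0x00000002U | 0x00000200U)   /* == 0x202 */
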
Anas Nashif
f2cb20c772 docs: fix misspelling across the tree
Found a few annoying typos and figured I better run script and
fix anything it can find, here are the results...

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-19 15:34:13 -05:00
David B. Kinder
2aebc980e2 doc: fix Kconfig misspellings
Fix misspellings in Kconfig files missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2019-06-18 15:07:52 -04:00
Charles E. Youse
1444ee970e arch/x86: reorganize core source files
Create source directory for IA32-subarch specific files, and move
qualifying files to that subdirectory.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-17 16:31:37 -04:00
Charles E. Youse
8f14b2ed86 arch/x86: split CMakeLists.txt into subarch-specific files
Separate common, ia32-specific, and x64-specific code into separate files.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-17 16:31:37 -04:00
Charles E. Youse
d2b33a486b arch/x86: split Kconfig files by sub-architecture
Separate common, ia32 (32-bit) and x64 (64-bit long mode) options.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-17 16:31:37 -04:00
Ioannis Glaropoulos
d840d1cbb5 arch: implement arch-specific float disable routines
This commit adds the architecture-specific implementation
of k_float_disable() for ARM and x86.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-06-12 09:17:45 -07:00
Charles E. Youse
ba516e8ea8 arch/x86: do not redefine MSR regs in crt0.S
The real-mode startup code is trivially changed to refer to MSR
definitions in include/arch/x86/msr.h, rather than its ad-hoc ones.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-08 15:41:36 -04:00
Charles E. Youse
0e166fa2a8 arch/x86: move MSR definitions to include/arch/x86/msr.h
Light reorganization. All MSR definitions and manipulation functions
are consolidated into one header. The names are changed to use an
X86_* prefix instead of IA32_* which is misleading/incorrect.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-08 15:41:36 -04:00
Charles E. Youse
6aedb6ff1a arch/x86: disable i8259 in crt0.S
drivers/interrupt_controller/i8259.c is not a driver; it exists
solely to disable the i8259s when the configuration calls for it.
The six-byte sequence to mask the controllers is moved to crt0.S
and the pseudo-driver is removed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-08 15:41:36 -04:00
Charles E. Youse
4c63e29aec arch/x86: drivers/display: add framebuffer driver w/ multiboot support
A basic display driver is added for a generic 32-bpp framebuffer.
Glue logic is added to the x86 arch to request the initialization
of a linear framebuffer by the Multiboot loader (GRUB) and connect
it to this generic driver.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-06 10:47:29 -07:00
Charles E. Youse
a1a3a4fced arch/x86: add support for Multiboot boot information structure
When booting using GRUB, some useful information about the environment
is given to us via a boot information structure. We've not made any
use of this information so far, but the x86 framebuffer driver will.

A skeletal definition of the structure is given, and provisions are
made to preserve its contents at boot if the configuration requires it.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-06-06 10:47:29 -07:00
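
A skeletal sketch of the boot information structure in question, per the
Multiboot specification (field names are ours, intermediate fields are
elided, and the commit's actual definition may differ):

    #include <stdint.h>

    struct multiboot_info {
        uint32_t flags;        /* says which of the fields below are valid */
        uint32_t mem_lower;    /* KB of memory below 1MB */
        uint32_t mem_upper;    /* KB of memory above 1MB */
        /* ... command line, module list, memory map, etc. elided, so
         * offsets here are not spec-accurate ...
         */
        uint64_t fb_addr;      /* physical address of the linear framebuffer */
        uint32_t fb_pitch;     /* bytes per scanline */
        uint32_t fb_width;
        uint32_t fb_height;
        uint8_t  fb_bpp;
        uint8_t  fb_type;
    };
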
Anas Nashif
76d9d7806d x86: remove unused and x86 only latency benchmark
We now have a multi-architecture latency benchmark; this one was x86
only, was never used or compiled in, and is outdated.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-03 09:42:00 -07:00
Charles E. Youse
50c71e6043 arch/x86: CONFIG_BOOTLOADER_UNKNOWN renamed to CONFIG_X86_MULTIBOOT
The only use of the BOOTLOADER_UNKNOWN config option is on x86, where
it controls whether a multiboot header is embedded in the output.
This patch renames the option to be more descriptive, and makes it
an x86-specific option, rather than a Zephyr top-level option.

This also enables X86_MULTIBOOT by default, since the header only
occupies 12-16 bytes of memory and is (almost always) harmless.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-05-08 14:49:19 -04:00
Ioannis Glaropoulos
873dd10ea4 kernel: mem_domain: update name/doc of API function for partition add
Update the name of mem-domain API function to add a partition
so that it complies with the 'z_' prefix convention. Correct
the function documentation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-05-02 11:37:38 -04:00
Ioannis Glaropoulos
53f0f277b0 arch: x86: mmu: typo fixes
Typo fixes in functions' documentation.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-05-02 11:37:38 -04:00
Paul Sokolovsky
ec1ffc8bcf arch: x86: fatal: If possible, print thread name in crash dump
It's relatively hard to figure out what thread a crash happens in
from the crash dump. E.g., it's usually not immediately possible to
find it out from the linker map, due to the fact that static symbols are
not there (https://sourceware.org/bugzilla/show_bug.cgi?id=16566).

So, make this as easy as possible by just printing the thread name
in the dump, if thread names are enabled at all.

Signed-off-by: Paul Sokolovsky <paul.sokolovsky@linaro.org>
2019-04-24 09:01:07 -07:00
Charles E. Youse
e039053546 uart/ns16550, drivers/pcie: add PCI(e) support
A parallel PCI implementation ("pcie") is added with features for PCIe.
In particular, message-signaled interrupts (MSI) are supported, which
are essential to the use of any non-trivial PCIe device.

The NS16550 UART driver is modified to use pcie.

pcie is a complete replacement for the old PCI support ("pci"). It is
smaller, by an order of magnitude, and cleaner. Both pci and pcie can
(and do) coexist in the same builds, but the intent is to rework any
existing drivers that depend on pci and ultimately remove pci entirely.

This patch is large, but things in mirror are smaller than they appear.
Most of the modified files are configuration-related, and are changed
only slightly to accommodate the modified UART driver.

Deficiencies:

64-bit support is minimal. The code works fine with 64-bit capable
devices, but will not cooperate with MMIO regions (or MSI targets) that
have high bits set. This is not needed on any current boards, and is
unlikely to be needed in the future. Only superficial changes would
be required if we change our minds.

The method of specifying PCI endpoints in devicetree is somewhat kludgey.
The "right" way would be to hang PCI devices off a topological tree;
while this would be more aesthetically pleasing, I don't think it's
worth the effort, given our non-standard use of devicetree.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-04-17 10:50:05 -07:00
Anas Nashif
3ae52624ff license: cleanup: add SPDX Apache-2.0 license identifier
Update the files which contain no license information with the
'Apache-2.0' SPDX license identifier.  Many source files in the tree are
missing licensing information, which makes it harder for compliance
tools to determine the correct license.

By default all files without license information are under the default
license of Zephyr, which is Apache version 2.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-04-07 08:45:22 -04:00
Adithya Baglody
b33dd7ebde tests: benchmark: timing_info: Fixed incorrect results.
The results were incorrect because the timer was firing the
interrupts before the measurement was made.

Fixes: GH-14556

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-04-05 16:10:27 -04:00
Andrew Boie
4e5c093e66 kernel: demote K_THREAD_STACK_BUFFER() to private
This macro is slated for complete removal, as it's not possible
on arches with an MPU stack guard to know the true buffer bounds
without also knowing the runtime state of its associated thread.

As removing this completely would be invasive to where we are
in the 1.14 release, demote to a private kernel Z_ API instead.
The current way that the macro is being used internally will
not cause any undue harm, we just don't want any external code
depending on it.

The final work to remove this (and overhaul stack specification in
general) will take place in 1.15 in the context of #14269

Fixes: #14766

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-04-05 16:10:02 -04:00
Patrik Flykt
7c0a245d32 arch: Rename reserved function names
Rename reserved function names in arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-04-03 17:31:00 -04:00
Flavio Ceolin
b80c3d9c77 arch: x86: Remove not used fp struct
The legacy struct s_coopFloatReg was never used; even though it was
an empty struct (so it wasted no space), some symbols were still being
generated for it.

Nevertheless, neither C99 nor C11 allow empty structs, so this
was also a violation to the C standards.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-04-03 12:06:31 -04:00
Patrik Flykt
24d71431e9 all: Add 'U' suffix when using unsigned variables
Add a 'U' suffix to values when computing and comparing against
unsigned variables.

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-28 17:15:58 -05:00
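
The change pattern, in miniature (an illustrative snippet, not taken from
the diff):

    uint32_t retries = 3U;      /* was: = 3 */

    while (retries > 0U) {      /* was: > 0 */
        retries--;
    }
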
Flavio Ceolin
39a50f6392 arch: x86: Use proper essential types in operands
MISRA defines a series of essential types (boolean, signed/unsigned
integers, float, ...), and operations must respect these essential types.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
Flavio Ceolin
d410611180 arch: Use macro BIT for shift operations
BIT macro uses an unsigned int avoiding implementation-defined behavior
when shifting signed types.

MISRA-C rule 10.1

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-26 14:31:29 -04:00
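
For context, Zephyr's BIT() is (roughly) an unsigned shift, which is what
sidesteps the implementation-defined behaviour; the flag name below is
illustrative only.

    #define BIT(n)  (1UL << (n))        /* unsigned, so no signed-shift issues */

    #define UART_FIFO_ENABLE  BIT(0)    /* replaces (1 << 0) */
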
Daniel Leung
c31e659165 codecov: avoid inlining functions for correct execution counts
This adds a compiler option -fno-inline for code coverage on
architectures which support doing code coverage. This also
modifies the ALWAYS_INLINE macro to not do any inlining. This
needs to be done so code coverage can count the number of
executions to the correct lines.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2019-03-26 13:28:30 -04:00
Piotr Mienkowski
17b08ceca5 power: clean up system power managment function names
This commit cleans up names of system power management functions by
assuring that:
- all functions start with 'sys_pm_' prefix
- API functions which should not be exposed to the user start with '_'
- name of the function hints at its purpose

Signed-off-by: Piotr Mienkowski <piotr.mienkowski@gmail.com>
2019-03-26 13:27:55 -04:00
Andrew Boie
50d72ed9c9 x86: implement eager FP save/restore
Speculative execution side channel attacks can read the
entire FPU/SIMD register state on affected Intel Core
processors, see CVE-2018-3665.

We now have two options for managing floating point
context between threads on x86: CONFIG_EAGER_FP_SHARING
and CONFIG_LAZY_FP_SHARING.

The mitigation is to unconditionally save/restore these
registers on context switch, instead of the lazy sharing
algorithm used by CONFIG_LAZY_FP_SHARING.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-11 20:36:55 -07:00
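
In outline, the eager policy amounts to the following sketch (type, field,
and function names are illustrative, not the actual swap code), rather than
faulting lazily on first FPU use:

    /* 512-byte, 16-byte-aligned save area required by FXSAVE/FXRSTOR */
    struct fp_regs {
        uint8_t fxsave_area[512];
    } __aligned(16);

    static inline void eager_fp_swap(struct fp_regs *outgoing,
                                     struct fp_regs *incoming)
    {
        /* unconditionally save the outgoing and restore the incoming state */
        __asm__ volatile ("fxsave %0"  : "=m" (*outgoing));
        __asm__ volatile ("fxrstor %0" : : "m" (*incoming));
    }
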
Patrik Flykt
4344e27c26 all: Update reserved function names
Update reserved function names starting with one underscore, replacing
them as follows:
   '_k_' with 'z_'
   '_K_' with 'Z_'
   '_handler_' with 'z_handl_'
   '_Cstart' with 'z_cstart'
   '_Swap' with 'z_swap'

This renaming is done on both global and those static function names
in kernel/include and include/. Other static function names in kernel/
are renamed by removing the leading underscore. Other function names
not starting with any prefix listed above are renamed starting with
a 'z_' or 'Z_' prefix.

Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.

Various generator scripts have also been updated as well as perf,
linker and usb files. These are
   drivers/serial/uart_handlers.c
   include/linker/kobject-text.ld
   kernel/include/syscall_handler.h
   scripts/gen_kobject_list.py
   scripts/gen_syscall_header.py

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2019-03-11 13:48:42 -04:00
Andrew Boie
82205e61e7 x86: fix Spectre V1 index checks
We add two points where we add lfences to disable
speculation:

* In the memory buffer validation code, which takes memory
  addresses and sizes from userspace and determines whether
  this memory is actually accessible.

* In the system call landing site, after the system call ID
  has been validated but before it is used.

Kconfigs have been added to enable these checks if the CPU
is not known to be immune on X86.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-11 09:54:04 -07:00
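
A C-level sketch of the second site (the real landing-site check lives in
assembly, and the function name here is illustrative): the lfence keeps
speculation from indexing the dispatch table with an out-of-range ID.

    static inline uintptr_t syscall_lookup(uintptr_t id)
    {
        uintptr_t handler = 0;

        if (id < K_SYSCALL_LIMIT) {
            /* speculation barrier: id is now known to be in range */
            __asm__ volatile ("lfence" : : : "memory");
            handler = _k_syscall_table[id];
        }
        return handler;
    }
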
Flavio Ceolin
48c6548d6a x86: core: Remove extra parenthesis
An extra parenthesis was raising a warning when building with Clang/LLVM.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-03-06 22:40:25 -05:00
Andrew Boie
6dc3fd8e50 userspace: fix x86 issue with adding partitions
On x86, if a supervisor thread belonging to a memory domain
adds a new partition to that domain, subsequent context switches
to another thread in the same domain, or dropping itself to user
mode, do not end up with the correct setup in the page tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-03 23:44:13 -05:00
Andrew Boie
1459bed346 x86: fix pte corruption when setting large regions
We need a copy of the flags field for every PTE we are
updating; we can't just keep OR-ing in the address
field.

Fixes issues seen when setting flags for memory regions
larger than a page.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-03 22:34:25 -05:00
Andrew Boie
6c8825fc96 x86: mitigate L1 Terminal Fault vulnerability
During speculative execution, non-present pages are treated
as valid, which may expose their contents through side
channels.

Any non-present PTE will now have its address bits zeroed,
such that any speculative reads to them will go to the NULL
page.

The expected hit on performance is so minor that this is
enabled at all times.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-01 15:22:41 -08:00
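
A sketch of the PTE scrubbing described above (the constants are defined
locally for illustration): clearing both the present bit and the address
field makes any speculative walk through the entry resolve to page zero.

    #define PTE_PRESENT    0x1ULL
    #define PTE_ADDR_MASK  0x000FFFFFFFFFF000ULL   /* PAE physical-address bits */

    static inline uint64_t pte_mark_non_present(uint64_t pte)
    {
        /* drop the present bit and zero the address bits */
        return pte & ~(PTE_PRESENT | PTE_ADDR_MASK);
    }
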
Andrew Boie
10040f603d x86: enable Extended IBRS
This is a CPU mitigation feature for Spectre V2 attacks

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-01 12:35:04 -08:00
Andrew Boie
e27ce67d25 x86: fix SSBD feature bit
The CPUID level 7 bit for SSBD is 31, not 26.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-01 12:35:04 -08:00
Andrew Boie
d3c89fea4f kernel: move CONFIG_RETPOLINE definition
Retpolines were never completely implemented, even on x86.
Move this particular Kconfig to only concern itself with
the assembly code, and don't default it on ever since we
prefer SSBD instead.

We can restore the common kernel-wide CONFIG_RETPOLINE once
we have an end-to-end implementation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-03-01 12:35:04 -08:00
Andrew Boie
ad2d7ca081 x86: fix page directory out of bounds
PAE page tables (the only kind we support) have 512
entries per page directory, not 1024.

Fixes: #13838

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-28 15:03:06 -08:00
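
For reference, the PAE paging geometry behind the fix (constant names here
are just for illustration):

    #define NUM_PDPT_ENTRIES  4U     /* page-directory-pointer-table entries */
    #define NUM_PD_ENTRIES    512U   /* entries per page directory (not 1024) */
    #define NUM_PT_ENTRIES    512U   /* entries per page table */
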
Andrew Boie
4ce652e4b2 userspace: remove APP_SHARED_MEM Kconfig
This is an integral part of userspace and cannot be used
on its own. Fold into the main userspace configuration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-23 07:43:55 -05:00
Andrew Boie
5f4683db34 x86: fix ROM permissions
Only the text area now has execute permissions,
instead of both text and rodata.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-15 13:10:18 -08:00
Andrew Boie
65da531aed x86: fix exception stack pointer reporting
If the faulting context is in user mode, then we are
not on the same stack due to HW-level stack switching
on privilege elevation, and the faulting ESP is on
the stack itself.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-15 09:48:37 -05:00
Andrew Boie
21337019b0 x86: get oops reason code more robustly
The code did not consider privilege level stack switches.
We have the original stack pointer in the NANO_ESF,
just use that.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-15 09:48:37 -05:00
Andrew Boie
747dcbc8f2 x86: improve stack overflow detection
We now have a dedicated function to test whether
a memory region is within the boundary of the
faulting context's stack buffer.

We use this to determine whether a page or double fault
was due to ESP being outside the bounds of the stack,
as well as when unwinding stack frames to print debug
output.

Fixes two issues:
- Stack overflows in user mode being incorrectly reported
  as just page fault exceptions
- Exceptions that occur when unwinding corrupted stacks

The type of fault which triggered the stack overflow
logic (double or page fault) is now always shown.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-15 09:48:37 -05:00
Andrew Boie
62d866385e x86: fix crash in _arch_buffer_validate
The code wasn't checking if the memory address to check
corresponded to a non-present page directory pointer
table entry.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Andrew Boie
2cfeba8507 x86: implement interrupt stack trampoline
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.

Adjustments to page tables are now as follows:

- Any adjustments for stack memory access now are always done
  to the user page tables

- Any adjustments for memory domains are now always done to
  the user page tables

- With KPTI, resetting a page now clears the present bit

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Andrew Boie
f093285345 x86: modify MMU APIs for multiple page tables
Current set of APIs and macros assumed that only one set
of page tables would ever be in use.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Andrew Boie
d2886ab8bc x86: clear EFLAGS on double fault
In the event of a double fault, we do a HW task switch to
a special _df_tss hardware task which resets the stack
pointer to the interrupt stack and otherwise restores
the main hardware task to a runnable state so that
_df_handler_bottom() can run.

However, we need to make sure that _df_handler_bottom()
runs with interrupts locked, otherwise another IRQ could
corrupt the interrupt stack resulting in undefined
behavior.

We have very little stack space to work with in this
context, just zero it. It's a fatal error for the thread
in any event.

Fixes: #7291

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-13 10:58:42 -08:00
Piotr Zięcik
9cc63e07e4 power: Fix naming of Kconfig options controlling deep sleep states
This commit changes the names of SYS_POWER_DEEP_SLEEP* Kconfig
options in order to match SYS_POWER_LOW_POWER_STATE* naming
scheme.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-12 07:46:32 -05:00
Andrew Boie
f087cd0774 x86: fix app_smem MMU permissions
At boot, user threads were being granted access to the entire
app shared memory section. This is incorrect; user threads should
have no access until they are added to a memory domain, which
may contain partitions defined within it.

Change from MMU_ENTRY_USER (which grants permission at boot)
to MMU_ENTRY_RUNTIME_USER (which indicates that the pages may
be granted to user mode at runtime, but not at boot).

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 14:02:31 -08:00
Andy Ross
aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
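
Roughly, the resulting pair of internal entry points (signatures paraphrased
from the description above, not copied from the tree):

    /* legacy path: restores the irq_lock() key on the way out */
    int _Swap_irqlock(unsigned int key);

    /* new path: atomically releases the spinlock as part of the switch */
    int _Swap(struct k_spinlock *lock, k_spinlock_key_t key);
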
Andrew Boie
41f6011c36 userspace: remove APPLICATION_MEMORY feature
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.

To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
71a3b53504 x86: don't automatically configure newlib malloc
This diverges from policy for all of our other arches
and C libraries. Global access to the malloc arena
may not be desirable.

Forthcoming patch will expose, for all C libraries, a
k_mem_partition with the malloc arena which can be
added to domains as desired.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Adithya Baglody
9bebf4cb23 x86: fix app shared memory if XIP enabled
This is a separate data section which needs to be copied into
RAM.

Most arches just use the kernel's _data_copy(), but x86 has its
own optimized copying code.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
2d9bbdf5f3 x86: remove support for non-PAE page tables
PAE tables introduce the NX bit which is very desirable
from a security perspective, back in 1995.

PAE tables are larger, but we are not targeting x86 memory
protection for RAM constrained devices.

Remove the old style 32-bit tables to make the x86 port
easier to maintain.

Renamed some verbosely named data structures, and fixed
incorrect number of entries for the page directory
pointer table.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-05 20:51:21 -08:00
Adithya Baglody
cb536111a9 Gcov: Added support for x86.
This patch adds all the required hooks needed in the kernel to
get the coverage reports from x86 SoCs.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-16 06:12:33 -05:00
Flavio Ceolin
6d50df212f arch: x86: Make statements evaluate boolean expressions
MISRA-C requires that the controlling expression of an if statement has
essentially Boolean type.

MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-01-07 08:52:07 -05:00
Anas Nashif
74a74bb6b8 power: rename api sys_soc -> sys_
The sys_soc prefix is redundant; just call the APIs sys_*.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-28 16:16:28 -05:00
Anas Nashif
9151fbebf2 power: rename APIs and removing leading _
Remove leading underscore from PM APIs. _ was used for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-28 16:16:28 -05:00
Flavio Ceolin
0ed0d164ef arch: x86: Use macro BIT to shift bits
The operation was shifting a bit using a signed constant in the left
operand. Use the BIT macro to do it properly.

MISRA-C rule 12.2

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-11 14:37:10 -08:00
Flavio Ceolin
4b35dd2628 misra: Fixes for MISRA-C rule 8.2
C90 introduced function prototypes, which allow argument types
to be checked against parameter types, though it is not necessary to
specify names for the parameters. MISRA-C requires names for function
prototype parameters; it claims that names can provide useful
information regarding the function interface.

MISRA-C rule 8.2

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
34b12d8f16 arch: x86: x86_mmu: Remove possible dead code
When __ASSERT is not enabled, the variable total_partitions is
assigned but never used.

MISRA-C rule 2.2

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
1492df3415 x86: core: thread: Avoid clash with function identifier
There is a function called _thread_entry defined in
lib/thread_entry.c. Just changing the name to fix the MISRA-C violation.

MISRA-C rule 5.8

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Flavio Ceolin
c42dc9435d arch: x86: Make tag name unique
Renaming a variable to not clash with a struct name.

MISRA-C rule 5.7

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-12-07 09:06:34 -05:00
Andrew Boie
ab82ef4ca5 x86: always build the page fault handler
Previously, this was only built if CONFIG_EXCEPTION_DEBUG
was enabled, but CONFIG_USERSPACE needs it too for validating
strings sent in from user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-06 12:44:23 -08:00
Andrew Boie
d4363e4185 x86: print helpful message on FPU exception
A few people have tripped over this recently.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-06 12:44:23 -08:00
Patrik Flykt
494ef1cfe2 arch: Add 'U' to unsigned variable assignments
Add 'U' to a value when assigning it to an unsigned variable.
MISRA-C rule 7.2

Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
2018-12-04 22:51:56 -05:00
Flavio Ceolin
6e1f6e5d5d arch: x86: Make if statement evaluate a boolean expression
MISRA-C rule 14.4

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-30 08:05:11 -08:00
Flavio Ceolin
001ad8b6c2 arch: Making body of selection statement a compound statement
MISRA-C rule 15.6

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-11-29 14:21:29 -08:00
Andrew Boie
7bac15f2ff x86: add dynamic interrupt support
If dynamic interrupts are enabled, a set of trampoline stubs
are generated which transfer control to a common dynamic
interrupt handler function, which then looks up the proper
handler and parameter and then executes the interrupt.

Based on the prior x86 dynamic interrupt implementation which
was removed from the kernel some time ago, and adapted to
changes in the common interrupt handling code, build system,
and IDT generation tools.

An alternative approach could be to read the currently executing
vector out of the APIC, but this is a much slower operation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-11-10 11:01:22 -05:00
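
A hedged usage sketch (the IRQ line, priority, and exact signature are
assumptions to verify against the tree): with dynamic interrupts enabled,
an ISR can be attached at run time instead of through the static
IRQ_CONNECT() macro.

    #define MY_IRQ_LINE  11   /* hypothetical IRQ number */
    #define MY_IRQ_PRIO  2    /* hypothetical priority   */

    static void my_isr(void *arg)
    {
        ARG_UNUSED(arg);
        /* service the device */
    }

    void install_my_isr(void)
    {
        irq_connect_dynamic(MY_IRQ_LINE, MY_IRQ_PRIO, my_isr, NULL, 0);
        irq_enable(MY_IRQ_LINE);
    }
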
Flavio Ceolin
61a1057ea5 kernel: Remove redundant type name
struct k_thread already has a pointer type, k_tid_t; there is no need
for this tcs definition.

Less symbols/names make the code cleaner and more readable.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-10-31 19:43:47 -04:00
Andy Ross
9098a45c84 kernel: New timeslicing implementation
Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~5 Hz) and on true rescheduling,
   not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-10-16 15:03:10 -04:00
Flavio Ceolin
78f27a81f5 kernel: Using the same parameter names in a specific function
MISRA-C requires that all declarations of a specific function, or
object, use the same names and type qualifiers.

MISRA-C rule 8.3

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-28 07:58:19 +05:30
Mark Ruvald Pedersen
d67096da05 portability: Avoid void* arithmetics which is a GNU extension
Under GNU C, sizeof(void) = 1. This commit merely makes it an explicit u8.

Pointer arithmetics over void types is:
 * A GNU C extension
 * Not supported by Clang
 * Illegal across all ISO C standards

See also: https://gcc.gnu.org/onlinedocs/gcc/Pointer-Arith.html

Signed-off-by: Mark Ruvald Pedersen <mped@oticon.com>
2018-09-28 07:57:28 +05:30
Krzysztof Chruscinski
27459a13e4 arch: Add LOG_PANIC to fault handlers
Added LOG_PANIC to fault handlers to ensure that the log is flushed and
the logger processes messages in a blocking way in the fault handler.

Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
2018-09-27 13:11:26 +05:30
Flavio Ceolin
5884c7f54b kernel: Explicitly ignoring _Swap return
Ignoring _Swap return where there is no treatment or nothing to do.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-09-14 16:55:37 -04:00
Sebastian Bøe
69d8c1c08c syscalls: Correct the type of _k_syscall_table
_k_syscall_table is an array of function pointers and is declared as
such in C sources, this makes it an STT_OBJECT[0] in the symbol
table. But when the same symbol is declared in assembly, it is
declared to be a function, which would make the symbol an STT_FUNC.

When linking with LTO this type inconsistency results in the warning:

real-ld: Warning: type of symbol `_k_syscall_table' changed from 2 to
1 in /tmp/cc84ofK0.ltrans8.ltrans.o

To fix this warning we declare the table with GDATA instead of GTEXT,
which will change the type from 'function' to 'object'.

[0]
https://docs.oracle.com/cd/E19455-01/816-0559/chapter6-79797/index.html

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-08-26 08:54:27 -07:00
Anas Nashif
483910ab4b systemview: add support natively using tracing hooks
Add needed hooks as a subsystem that can be enabled in any application.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif
a2248782a2 kernel: event_logger: remove kernel_event_logger
Move to more generic tracing hooks that can be implemented in different
ways and do not interfere with the kernel.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Anas Nashif
b6304e66f6 tracing: support generic tracing hooks
Define generic interface and hooks for tracing to replace
kernel_event_logger and existing tracing facilities with something more
common.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-08-21 05:45:47 -07:00
Adithya Baglody
a8b0b0d5e8 benchmarks: timing_info: Add hooks in the kernel for userspace.
Added sampling hooks in the kernel needed for userspace benchmarks.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-08-20 06:51:25 -07:00
Flavio Ceolin
0866d18d03 irq: Fix irq_lock api usage
irq_lock returns an unsigned int, though several places were using a
signed int. This commit fixes that behaviour.

In order to avoid this error happening again, a coccinelle script was
added and can be used to check for violations.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2018-08-16 19:47:41 -07:00
Sebastian Bøe
1186f5bb29 cmake: Deprecate the 2 symbols _SYSCALL_{LIMIT,BAD}
There exist two symbols that became equivalent when PR #9383 was
merged; _SYSCALL_LIMIT and K_SYSCALL_LIMIT. This patch deprecates the
redundant _SYSCALL_LIMIT symbol.

_SYSCALL_LIMIT was initially introduced because before PR #9383 was
merged K_SYSCALL_LIMIT was an enum, which couldn't be included into
assembly files. PR #9383 converted it into a define, which can be
included into assembly files, making _SYSCALL_LIMIT redundant.

Likewise for _SYSCALL_BAD.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-08-15 11:46:51 -07:00
Ulf Magnusson
8cf8db3a73 Kconfig: Use a short, consistent style for prompts
Consistently use

    config FOO
            bool/int/hex/string "Prompt text"

instead of

    config FOO
            bool/int/hex/string
            prompt "Prompt text"

(...and a bunch of other variations that e.g. swapped the order of the
type and the 'prompt', or put other properties between them).

The shorthand is fully equivalent to using 'prompt'. It saves lines and
avoids tricking people into thinking there is some semantic difference.

Most of the grunt work was done by a modified version of
https://unix.stackexchange.com/questions/26284/
how-can-i-use-sed-to-replace-a-multi-line-string/26290#26290, but some
of the rarer variations had to be converted manually.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2018-08-15 04:10:10 -07:00
Kumar Gala
4b22ba7e4b syscall: Move arch specific syscall code into its own header
Split out the arch specific syscall code to reduce include pollution
from other arch related headers.  For example on ARM its possible to get
errno.h included via SoC specific headers.  Which created an interesting
compile issue because of the order of syscall & errno/errno syscall
inclusion.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2018-08-02 22:06:49 -05:00
Andrew Boie
9d3cdb3568 x86: add z_arch_user_string_nlen
Uses fixup infrastructure to safely abort if we get a page
fault while measuring a string passed in from user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-07-31 07:47:15 -07:00
Shawn Mosley
573f32b6d2 userspace: compartmentalized app memory organization
Summary: revised attempt at addressing issue 6290.  The
following provides an alternative to using
CONFIG_APPLICATION_MEMORY by compartmentalizing data into
Memory Domains.  Dependent on MPU limitations, supports
compartmentalized Memory Domains for 1...N logical
applications.  This is considered an initial attempt at
designing flexible compartmentalized Memory Domains for
multiple logical applications and, with the provided python
script and edited CMakeLists.txt, provides support for power
of 2 aligned MPU architectures.

Overview: The current patch uses qualifiers to group data into
subsections.  The qualifier usage allows for dynamic subsection
creation and affords the developer a large amount of flexibility
in the grouping, naming, and size of the resulting partitions and
domains that are built on these subsections. By additional macro
calls, functions are created that help calculate the size,
address, and permissions for the subsections and enable the
developer to control application data in specified partitions and
memory domains.

Background: Initial attempts focused on creating a single
section in the linker script that then contained internally
grouped variables/data to allow MPU/MMU alignment and protection.
This did not provide additional functionality beyond
CONFIG_APPLICATION_MEMORY as we were unable to reliably group
data or determine their grouping via exported linker symbols.
Thus, the resulting decision was made to dynamically create
subsections using the current qualifier method. An attempt to
group the data by object file was tested, but found that this
broke applications such as ztest where two object files are
created: ztest and main.  This also creates an issue of grouping
the two object files together in the same memory domain while
also allowing for compartmenting other data among threads.

Because it is not possible to know a) the name of the partition
and thus the symbol in the linker, b) the size of all the data
in the subsection, nor c) the overall number of partitions
created by the developer, it was not feasible to align the
subsections at compile time without using dynamically generated
linker script for MPU architectures requiring power of 2
alignment.

In order to provide support for MPU architectures that require a
power of 2 alignment, a python script is run at build prior to
when linker_priv_stacks.cmd is generated.  This script scans the
built object files for all possible partitions and the names given
to them. It then generates a linker file (app_smem.ld) that is
included in the main linker.ld file.  This app_smem.ld allows the
compiler and linker to then create each subsection and align to
the next power of 2.

Usage:
 - Requires: app_memory/app_memdomain.h .
 - _app_dmem(id) marks a variable to be placed into a data
section for memory partition id.
 - _app_bmem(id) marks a variable to be placed into a bss
section for memory partition id.
 - These are seen in the linker.map as "data_smem_id" and
"data_smem_idb".
 - To create a k_mem_partition, call the macro
app_mem_partition(part0) where "part0" is the name then used to
refer to that partition. This macro only creates a function and
necessary data structures for the later "initialization".
 - To create a memory domain for the partition, the macro
app_mem_domain(dom0) is called where "dom0" is the name then
used for the memory domain.
 - To initialize the partition (effectively adding the partition
to a linked list), init_part_part0() is called. This is followed
by init_app_memory(), which walks all partitions in the linked
list and calculates the sizes for each partition.
 - Once the partition is initialized, the domain can be
initialized with init_domain_dom0(part0) which initializes the
domain with partition part0.
 - After the domain has been initialized, the current thread
can be added using add_thread_dom0(k_current_get()).
 - The code used in ztests and kernel/init has been added under
a conditional #ifdef to isolate the code from other tests.
The userspace test CMakeLists.txt file has commands to insert
the CONFIG_APP_SHARED_MEM definition into the required build
targets.
  Example:
        /* create partition at top of file outside functions */
        app_mem_partition(part0);
        /* create domain */
        app_mem_domain(dom0);
        _app_dmem(dom0) int var1;
        _app_bmem(dom0) static volatile int var2;

        int main()
        {
                init_part_part0();
                init_app_memory();
                init_domain_dom0(part0);
                add_thread_dom0(k_current_get());
                ...
        }

 - If multiple partitions are being created, a variadic
preprocessor macro can be used as provided in
app_macro_support.h:

        FOR_EACH(app_mem_partition, part0, part1, part2);

or, for multiple domains, similarly:

        FOR_EACH(app_mem_domain, dom0, dom1);

Similarly, the init_part_* can also be used in the macro:

        FOR_EACH(init_part, part0, part1, part2);

Testing:
 - This has been successfully tested on qemu_x86 and the
ARM frdm_k64f board.  It compiles and builds power of 2
aligned subsections for the linker script on the 96b_carbon
boards.  These power of 2 alignments have been checked by
hand and are viewable in the zephyr.map file that is
produced during build. However, due to a shortage of
available MPU regions on the 96b_carbon board, we are unable
to test this.
 - When run on the 96b_carbon board, the test suite will
enter execution, but each individual test will fail due to
an MPU FAULT.  This is expected as the required number of
MPU regions exceeds the number allowed due to the static
allocation. As the MPU driver does not detect this issue,
the fault occurs because the data being accessed has been
placed outside the active MPU region.
 - This now compiles successfully for the ARC boards
em_starterkit_em7d and em_starterkit_em7d_v22. However,
as we lack ARC hardware to run this build on, we are unable
to test this build.

Current known issues:
1) While the script and edited CMakeLists.txt creates the
ability to align to the next power of 2, this does not
address the shortage of available MPU regions on certain
devices (e.g. 96b_carbon).  In testing the APB and PPB
regions were commented out.
2) checkpatch.pl lists several issues regarding the
following:
a) Complex macros. The FOR_EACH macros as defined in
app_macro_support.h are listed as complex macros needing
parentheses.  Adding parentheses breaks their
functionality, and we have otherwise been unable to
resolve the reported error.
b) __aligned() preferred. The _app_dmem_pad() and
_app_bmem_pad() macros give warnings that __aligned()
is preferred. Prior iterations had this implementation,
which resulted in errors due to "complex macros".
c) Trailing semicolon. The macro init_part(name) has
a trailing semicolon as the semicolon is needed for the
inlined macro call that is generated when this macro
expands.

Update: updated as an alternative to CONFIG_APPLICATION_MEMORY.
Added config option CONFIG_APP_SHARED_MEM to enable a new section
app_smem to contain the shared memory component.  This commit
separates the Kconfig definition from the definition used for the
conditional code.  The change is in response to changes in the
way the build system treats definitions.  The python script used
to generate a linker script for app_smem was also modified to
simplify the alignment directives.  A default linker script
app_smem.ld was added to remove the conditional includes dependency
on CONFIG_APP_SHARED_MEM.  By adding the default linker script,
the prebuild stages link properly prior to the python script running.
Signed-off-by: Joshua Domagalski <jedomag@tycho.nsa.gov>
Signed-off-by: Shawn Mosley <smmosle@tycho.nsa.gov>
2018-07-25 12:02:01 -07:00
Zide Chen
98775f34c3 kconfig: decouple realmode boot from CONFIG_JAILHOUSE
Add CONFIG_REALMODE item so that it's possible to configure other
x86 boards to boot from real mode.

Signed-off-by: Zide Chen <zide.chen@intel.com>
2018-07-24 15:37:09 -07:00
Anas Nashif
bebda565b5 clang: fix for x86 iamcu
With Clang, build IAMCU with -miamcu.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-01 22:58:09 +02:00
Anas Nashif
72edc4e15f clang/llvm: add initial configuration file for clang
Add an LLVM backend and a clang toolchain variant to support building
with the LLVM that ships with popular Linux distributions.

This has been tested with X86 boards:
- quark_d2000_crb
- quark_se_c1000_devboard/Arduino 101

Use:

export ZEPHYR_TOOLCHAIN_VARIANT=clang

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-07-01 22:58:09 +02:00
Ulf Magnusson
87ecbe7f48 arch: x86: Kconfig: Remove redundant 'default n' properties
Bool symbols implicitly default to 'n'.

A 'default n' can make sense e.g. in a Kconfig.defconfig file, if you
want to override a 'default y' on the base definition of the symbol. It
isn't used like that on any of these symbols though.

Signed-off-by: Ulf Magnusson <Ulf.Magnusson@nordicsemi.no>
2018-06-26 11:07:57 -05:00
Sebastian Bøe
a1e806bf44 gen_isr_tables: Delete the dead code accompanying .intList.num_isrs
intList has been populated with the number of ISRs, aka interrupts,
but nothing has been using this information, so we drop it and
everything used to construct it.

Signed-off-by: Sebastian Bøe <sebastian.boe@nordicsemi.no>
2018-06-25 12:54:49 -07:00
Andrew Boie
2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Leandro Pereira
edd18c8f5a arch: x86: Better document that CR0.WP will also be set when CR0.PG is
Setting bit CR0.WP (bit 16) will inhibit supervisor threads from
writing to RO pages.  It's a necessary flag to be set, and the constant
name CR0_PAGING_ENABLE didn't reflect the fact that the 16th bit was
being set.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-26 19:09:33 -04:00
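
For reference, the bit positions involved and the usual read-modify-write
of CR0 look roughly as below; the constant names here are illustrative,
not necessarily the ones used in the tree:

#define CR0_PG (1UL << 31)  /* enable paging */
#define CR0_WP (1UL << 16)  /* fault on supervisor writes to read-only pages */

static inline void cr0_enable_paging_and_wp(void)
{
        unsigned long cr0;

        __asm__ volatile("movl %%cr0, %0" : "=r"(cr0));
        cr0 |= CR0_PG | CR0_WP;
        __asm__ volatile("movl %0, %%cr0" : : "r"(cr0) : "memory");
}
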
Andy Ross
3a0cb2d35d kernel: Remove legacy preemption checking
The metairq feature exposed the fact that all of our arch code (and a
few mistaken spots in the scheduler too) was trying to interpret
"preemptible" threads independently.

As of the scheduler rewrite, that logic is entirely within sched.c and
doing it externally is redundant.  And now that "cooperative" threads
can be preempted, it's wrong and produces test failures when used with
metairq threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-25 09:40:55 -07:00
Leandro Pereira
ecadd465a2 arch: x86: Allow disabling speculative store bypass
In order to mitigate against Spectre V4, add an option that will, at
boot time, verify if the CPU supports the SPEC_CTRL MSR; if so, it'll
attempt to disable the feature.

More information can be found in chapter 4 (Speculative Store Bypass
Mitigation) of the "Speculative Execution Side Channel Mitigations"
document, version 2, published by Intel: https://goo.gl/nocTcj

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-24 13:07:12 -04:00
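
As a sketch of what such a mitigation boils down to (MSR and bit numbers
per the Intel SDM; the accessors are the _x86_msr_read()/_x86_msr_write()
helpers from the next entry below, and the function name here is made up):

#include <zephyr/types.h>

#define CPUID_EXT_FEATURES   7U
#define CPUID_EDX_SSBD       (1U << 31)   /* CPUID.(EAX=7,ECX=0):EDX[31] */

#define IA32_SPEC_CTRL_MSR   0x48
#define SPEC_CTRL_SSBD       (1ULL << 2)  /* Speculative Store Bypass Disable */

static void maybe_disable_ssb(void)
{
        u32_t eax = CPUID_EXT_FEATURES, ebx, ecx = 0U, edx;

        __asm__ volatile("cpuid"
                         : "+a"(eax), "=b"(ebx), "+c"(ecx), "=d"(edx));

        if (edx & CPUID_EDX_SSBD) {
                u64_t spec_ctrl = _x86_msr_read(IA32_SPEC_CTRL_MSR);

                /* Setting SSBD disables speculative store bypass */
                _x86_msr_write(IA32_SPEC_CTRL_MSR,
                               spec_ctrl | SPEC_CTRL_SSBD);
        }
}
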
Leandro Pereira
d4221f9a71 arch: x86: Rename MSR-handling functions to conform to convention
Rename _MsrRead() and _MsrWrite() to _x86_msr_read() and
_x86_msr_write() respectively.

Given that these functions are essentially implemented in assembly,
make them static inline.  They can be inlined by the compiler quite
well, most of the time resulting in space savings due to better
handling of the clobbered registers.

Also simplifies the inline assembly, using constraints instead of
moving registers ourselves.  Should shave off a few bytes from code
using these functions.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-23 14:38:22 -04:00
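
A minimal static-inline form of these accessors might look as follows,
assuming 32-bit x86 where GCC's "A" constraint names the EDX:EAX pair
used by rdmsr/wrmsr (a sketch, not necessarily the exact in-tree code):

#include <zephyr/types.h>

static inline u64_t _x86_msr_read(unsigned int msr)
{
        u64_t value;

        /* rdmsr: ECX selects the MSR, the value comes back in EDX:EAX */
        __asm__ volatile("rdmsr" : "=A"(value) : "c"(msr));

        return value;
}

static inline void _x86_msr_write(unsigned int msr, u64_t value)
{
        /* wrmsr: ECX selects the MSR, EDX:EAX supplies the value */
        __asm__ volatile("wrmsr" : : "c"(msr), "A"(value));
}
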
Adithya Baglody
5ab3960c75 arch: Cmake: Add __ZEPHYR_SUPERVISOR__ macro for arch files.
Normally a syscall would check the current privilege level and then
decide to go to _impl_<syscall> directly or go through a
_handler_<syscall>.
__ZEPHYR_SUPERVISOR__ is a compiler optimization flag which will
make all the system calls from the arch files directly link
to the _impl_<syscall>, thereby reducing the overhead of checking
privileges.

In the previous implementation all the source files would be compiled
by the zephyr_source() rule. This means that zephyr_* is a catchall CMake
library for source files that can be built purely with the include
paths, defines, and other compiler flags that all zephyr source
files use. As a result, adding one extra compiler flag for only
one complete directory was not possible.
This limitation can be overcome by using the zephyr_library* APIs. This
creates a library for the required directories and also supports
directory-level properties.
Hence we use zephyr_library* to create a new library with the
macro __ZEPHYR_SUPERVISOR__ for the optimization.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Savinay Dharmappa
e524f0b846 dts: x86: derive RAM and ROM size from dts instead of Kconfig
This patch removes the Kconfig defines for RAM and ROM size in x86. Instead
these values are derived from dts.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
2018-05-14 17:19:23 -04:00
Andrew Boie
42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.

Fixes: #6814

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
Leandro Pereira
16472cafcf arch: x86: Use retpolines in core assembly routines
In order to mitigate Spectre variant 2 (branch target injection), use
retpolines for indirect jumps and calls.

The newly-added hidden CONFIG_X86_NO_SPECTRE flag, which is disabled
by default, must be set by an x86 SoC if its CPU does not perform
speculative execution.  Most targets supported by Zephyr do not, so this is
set to "y" by default.

A new setting, CONFIG_RETPOLINE, has been added to the "Security
Options" sections, and that will be enabled by default if
CONFIG_X86_NO_SPECTRE is disabled.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:00:01 +05:30
Anas Nashif
993c350b92 cleanup: replace old jira numbers with GH issues
Replace all references to old JIRA issues (ZEP) with the corresponding
Github issue ID.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-26 13:13:04 -04:00
Leandro Pereira
2f5659d4d2 arch: x86: Unwind the stack on fatal errors
Prints up to 8 stack frames, with the following format:
  RETURN_ADDR(ARGUMENTS)

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-16 14:12:15 -07:00
Andrew Boie
e00564d15d x86: fix logic for thread wrappers
If we enable CONFIG_DEBUG_INFO, then we need to fixup the stack
on thread entry so that the EFLAGS value in the EBP slot doesn't
confuse the debugger or any runtime stack unwinding code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-03-16 14:12:15 -07:00
Andrew Boie
c0771ea243 arch: x86: fix integer stub comments
The comment was obsolete; we simply do not allow use of the FPU or
vector math in ISRs. There is no desire to add such support; doing
this kind of work is properly offloaded to a worker thread.

Fixes #5283.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-03-06 17:16:03 -05:00
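
A typical way to do that offload is the system workqueue; a hedged
sketch follows (handler and ISR names are made up):

#include <kernel.h>

static struct k_work fp_work;

static void fp_work_handler(struct k_work *work)
{
        /* Thread context: floating point math is allowed here */
        volatile float scaled = 3.0f * 0.5f;

        ARG_UNUSED(work);
        ARG_UNUSED(scaled);
}

void my_isr(void *arg)
{
        ARG_UNUSED(arg);

        /* No FP in the ISR itself; defer it to the system workqueue */
        k_work_submit(&fp_work);
}

void fp_offload_init(void)
{
        k_work_init(&fp_work, fp_work_handler);
}
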
Andy Ross
9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Michael Scott
7a9b688526 x86: fix build warning
During compile of lwm2m_client using qemu_x86, the following build
warning was noticed:
zephyr/arch/x86/core/excstub.S:132:2: warning: "/*" within comment [-Wcomment]
  /*

In commit ff42bdd0a0 ("debug: remove option GDB_INFO"), the comment tag
was omitted.  Fix the comment end tag.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-02-12 19:21:06 -05:00
Anas Nashif
11a9625eaf debug: remove DEBUG_INFO option
This feature is X86-only and is not used or being tested. It is a legacy
feature and no one can prove it actually works. Remove it until we have
proper documentation, samples, and multi-architecture support.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-02-12 13:58:28 -08:00
Anas Nashif
ff42bdd0a0 debug: remove option GDB_INFO
This feature is X86-only and is not used or being tested. It is a legacy
feature and no one can prove it actually works. Remove it until we have
proper documentation, samples, and multi-architecture support.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-02-12 13:58:28 -08:00
Anas Nashif
b8ea7c889d x86: remove HAS_DTS checking
All X86 boards are now DTS enabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-01-29 10:38:32 -06:00
Anas Nashif
9f6c7838e5 arch: fix typo defafult -> default
Simple typo fix.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-01-08 08:08:45 -05:00
Adithya Baglody
9cde20aefa kernel: mem_domain: Add to current thread should configure immediately.
When the current thread is added to a memory domain, the pages/sections
must be configured immediately.
A problem occurs when we add the current thread to a domain and then drop
down to user mode: in such a case the memory domain would only become
active the next time a swap occurs.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-12-21 11:52:27 -08:00
Adithya Baglody
13ac4d4264 kernel: mem_domain: Add an arch interface to configure memory domain
Add architecture-specific code for the memory domain
configuration. This is needed to support the memory domain API
k_mem_domain_add_thread().

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-12-21 11:52:27 -08:00
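
The user-facing side this backs looks roughly like the following sketch;
partition attributes and alignment requirements are arch dependent, so
treat the buffer size, alignment, and attribute constant as placeholders:

#include <kernel.h>

static u8_t __aligned(1024) app_buf[1024];

K_MEM_PARTITION_DEFINE(app_part, app_buf, sizeof(app_buf),
                       K_MEM_PARTITION_P_RW_U_RW);

static struct k_mem_partition *app_parts[] = { &app_part };
static struct k_mem_domain app_dom;

void app_domain_setup(void)
{
        k_mem_domain_init(&app_dom, ARRAY_SIZE(app_parts), app_parts);

        /* With the changes above, adding the current thread takes effect
         * immediately, before any later drop to user mode.
         */
        k_mem_domain_add_thread(&app_dom, k_current_get());
}
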
Anas Nashif
429c2a4d9d kconfig: fix help syntax and add spaces
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-13 17:43:28 -06:00
Anas Nashif
abbaac9189 cleanup: remove nanokernel/nano leftovers
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-05 09:44:23 -06:00
Adithya Baglody
808ad6101e x86: swap: save the scratch pad registers.
Save the required scratch pad register (in this case only edx)
before calling the C function.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-27 11:50:50 -05:00
Sebastian Bøe
0829ddfe9a kbuild: Removed KBuild
Signed-off-by: Sebastian Boe <sebastian.boe@nordicsemi.no>
2017-11-08 20:00:22 -05:00
Sebastian Bøe
12f8f76165 Introduce cmake-based rewrite of KBuild
Introducing CMake is an important step in a larger effort to make
Zephyr easy to use for application developers working on different
platforms with different development environment needs.

Simplified, this change retains Kconfig as-is, and replaces all
Makefiles with CMakeLists.txt. The DSL-like Make language that KBuild
offers is replaced by a set of CMake extentions. These extentions have
either provided simple one-to-one translations of KBuild features or
introduced new concepts that replace KBuild concepts.

This is a breaking change for existing test infrastructure and build
scripts that are maintained out-of-tree. But for FW itself, no porting
should be necessary.

For users that just want to continue their work with minimal
disruption the following should suffice:

Install CMake 3.8.2+

Port any out-of-tree Makefiles to CMake.

Learn the absolute minimum about the new command line interface:

$ cd samples/hello_world
$ mkdir build && cd build
$ cmake -DBOARD=nrf52_pca10040 ..

$ cd build
$ make

PR: zephyrproject-rtos#4692
docs: http://docs.zephyrproject.org/getting_started/getting_started.html

Signed-off-by: Sebastian Boe <sebastian.boe@nordicsemi.no>
2017-11-08 20:00:22 -05:00
Adithya Baglody
538fa7b37c x86: MMU: Configure page tables entries for memory domain in swap.
During swap the required page tables are configured. The outgoing
thread's memory domain pages are reset and the incoming thread's
memory domain is loaded. The pages are configured if userspace
is enabled and if memory domain has been initialized before
calling swap.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Adithya Baglody
f7b0731ce4 x86: MMU: Memory domain implementation for x86
Added support for memory domain implementation.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie
2a8684f60c x86: de-couple user mode and HW stack protection
This is intended for memory-constrained systems and will save
4K per thread, since we will no longer reserve room for or
activate a kernel stack guard page.

If CONFIG_USERSPACE is enabled, stack overflows will still be
caught in some situations:

1) User mode threads overflowing stack, since it crashes into the
kernel stack page
2) Supervisor mode threads overflowing stack, since the kernel
stack page is marked non-present for non-user threads

Stack overflows will not be caught:

1) When handling a system call
2) When the interrupt stack overflows

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-07 09:31:49 -08:00
Gustavo Lima Chaves
97a8716a4f x86: Jailhouse port, tested for UART (# 0, polling) and LOAPIC timer
This is an introductory port for Zephyr to be run as a Jailhouse
hypervisor[1]'s "inmate cell", on x86 64-bit CPUs (running on 32-bit
mode). This was tested with their "tiny-demo" inmate demo cell
configuration, which takes one of the CPUs of the QEMU-VM root cell
config, along with some RAM and serial controller access (it will even
do nice things like reserving some L3 cache for it via Intel CAT) and
Zephyr samples:

   - hello_world
   - philosophers
   - synchronization

The final binary receives an additional boot sequence preamble that
conforms to Jailhouse's expectations (starts at 0x0 in real mode). It
will put the processor in 32-bit protected mode and then proceed to
Zephyr's __start function.

Testing it is just a matter of:
  $ make -C samples/<sample_dir> BOARD=x86_jailhouse JAILHOUSE_QEMU_IMG_FILE=<path_to_image.qcow2> run
  $ sudo insmod <path to jailhouse.ko>
  $ sudo jailhouse enable <path to configs/qemu-x86.cell>
  $ sudo jailhouse cell create <path to configs/tiny-demo.cell>
  $ sudo mount -t 9p -o trans=virtio host /mnt
  $ sudo jailhouse cell load tiny-demo /mnt/zephyr.bin
  $ sudo jailhouse cell start tiny-demo
  $ sudo jailhouse cell destroy tiny-demo
  $ sudo jailhouse disable
  $ sudo rmmod jailhouse

For the hello_world demo case, one should then get QEMU's serial port
output similar to:

"""
Created cell "tiny-demo"
Page pool usage after cell creation: mem 275/1480, remap 65607/131072
Cell "tiny-demo" can be loaded
CPU 3 received SIPI, vector 100
Started cell "tiny-demo"
***** BOOTING ZEPHYR OS v1.9.0 - BUILD: Sep 12 2017 20:03:22 *****
Hello World! x86
"""

Note that Jailhouse's root cell *has to be started in xAPIC
mode* (kernel command line argument 'nox2apic') in order for this to
work. x2APIC support and its reasoning will come on a separate commit.

As a reminder, the make run target introduced for x86_jailhouse board
involves a root cell image with Jailhouse in it, to be launched and then
partitioned (with >= 2 64-bit CPUs in it).

Inmate cell configs with no JAILHOUSE_CELL_PASSIVE_COMMREG flag
set (e.g. apic-demo one) would need extra code in Zephyr to deal with
cell shutdown command responses from the hypervisor.

You may want to fine tune CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC for your
specific CPU—there is no detection from Zephyr with regard to that.

Other config differences from pristine QEMU defaults worth of mention
are:

   - there is no HPET when running as Jailhouse guest. We use the LOAPIC
     timer, instead
   - there is no PIC_DISABLE, because there is no 8259A PIC when running
     as a Jailhouse guest
   - XIP also makes no sense when running as a Jailhouse guest, and both
     PHYS_RAM_ADDR/PHYS_LOAD_ADDR are set to zero, which is what the
     tiny-demo cell config is set to

This opens up new possibilities for Zephyr, so that usages beyond just
MCUs come to the table. I see special demand coming from
functional-safety related use cases in industry, automotive, etc.

[1] https://github.com/siemens/jailhouse

Reference to Jailhouse's booting preamble code:

Origin: Jailhouse
License: BSD 2-Clause
URL: https://github.com/siemens/jailhouse
commit: 607251b44397666a3cbbf859d784dccf20aba016
Purpose: Dual-licensing of inmate lib code
Maintained-by: Zephyr

Signed-off-by: Gustavo Lima Chaves <gustavo.lima.chaves@intel.com>
2017-11-07 08:58:49 -05:00
Anas Nashif
780324b8ed cleanup: rename fiber/task -> thread
We still have many places talking about tasks and threads, replace those
with thread terminology.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-10-30 18:41:15 -04:00
Andrew Boie
3f508e911e x86: fix CONFIG_DEBUG_INFO build error
This doesn't have any register operands and needs a size suffix.
Fixes: #4480

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-24 12:50:13 -07:00
Adithya Baglody
76edf8a681 x86: MMU: Enable boot time PAE page tables.
In PAE boot tables, __mmu_tables_start points to the page directory
pointer table (PDPT). Enable PAE by setting the CR4.PAE and
IA32_EFER.NXE bits.

JIRA:ZEP-2511

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-23 10:13:07 -07:00
Adithya Baglody
725de70d86 x86: MMU: Create PAE page structures and unions.
Created structures and unions needed to enable the software to
access these tables.
Also updated the helper macros to ease the usage of the MMU page
tables.

JIRA: ZEP-2511

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-23 10:13:07 -07:00
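
For orientation, a 64-bit PAE page table entry carries roughly the
following layout; the in-tree structures and unions may use different
field names, so this is only a sketch:

union pae_pte {
        u64_t value;
        struct {
                u64_t present       : 1;   /* P */
                u64_t rw            : 1;   /* R/W: writable when set */
                u64_t us            : 1;   /* U/S: user-accessible when set */
                u64_t pwt           : 1;   /* page-level write-through */
                u64_t pcd           : 1;   /* page-level cache disable */
                u64_t accessed      : 1;
                u64_t dirty         : 1;
                u64_t pat           : 1;
                u64_t global        : 1;
                u64_t ignored       : 3;
                u64_t page_frame    : 40;  /* bits 12..51 of the physical address */
                u64_t reserved      : 11;
                u64_t xd            : 1;   /* execute-disable, needs IA32_EFER.NXE */
        };
};
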
Andrew Boie
d7631ec7e4 Revert "x86: MMU: Memory domain implementation for x86"
This reverts commit d0f6ce2d98.
2017-10-20 15:02:59 -04:00
Andrew Boie
de777adf7b Revert "x86: MMU: Configure page tables entries for memory domain in swap."
This reverts commit a8b9353421.
2017-10-20 15:02:59 -04:00
Adithya Baglody
a8b9353421 x86: MMU: Configure page tables entries for memory domain in swap.
During swap the required page tables are configured. The outgoing
thread's memory domain pages are reset and the incoming thread's
memory domain is loaded. The pages are configured if userspace
is enabled and if memory domain has been initialized before
calling swap.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-20 10:39:51 -07:00
Adithya Baglody
d0f6ce2d98 x86: MMU: Memory domain implementation for x86
Added support for memory domain implementation.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-20 10:39:51 -07:00
Andrew Boie
c5c104f91e kernel: fix k_thread_stack_t definition
Currently this is defined as a k_thread_stack_t pointer.
However this isn't correct, stacks are defined as arrays. Extern
references to k_thread_stack_t doesn't work properly as the compiler
treats it as a pointer to the stack array and not the array itself.

Declaring as an unsized array of k_thread_stack_t doesn't work
well either. The least amount of confusion is to leave out the
pointer/array status completely, use pointers for function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.

The definitions for all functions and struct that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:24:29 -07:00
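
In practice that means declarations along these lines (stack name and
size are arbitrary here):

/* worker.c: the stack object itself */
K_THREAD_STACK_DEFINE(worker_stack, 1024);

/* worker.h: an extern reference, instead of
 * "extern k_thread_stack_t worker_stack[]"
 */
K_THREAD_STACK_EXTERN(worker_stack);

/* Function prototypes keep using the pointer form */
void start_worker(k_thread_stack_t *stack, size_t stack_size);
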
Andrew Boie
094f2cb77b x86: fix crash in _x86_mmu_get_flags
Looking up the PTE flags was page faulting if the address wasn't
marked as present in the page directory, since there is no page table
for that directory entry.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:16:14 -07:00
Andrew Boie
73a5fe77f8 x86: fix stack overflow in double fault handler
At very low optimization levels, the call to
K_THREAD_STACK_BUFFER doesn't get inlined, overflowing the
tiny stack.

Replace with _ARCH_THREAD_STACK_BUFFER() which on x86 is
just a macro.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 10:53:48 -07:00
Andrew Boie
3e3a237930 x86: fix stack zeroing when dropping to user mode
For 'rep stosl' ECX isn't a size value, it's how many times to repeat
the 4-byte string copy operation.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-05 18:49:09 -04:00
Andrew Boie
13ca6fe284 syscalls: reorganize headers
- syscall.h now contains those APIs needed to support invoking calls
  from user code. Some stuff moved out of main kernel.h.
- syscall_handler.h now contains directives useful for implementing
  system call handler functions. This header is not pulled in by
  kernel.h and is intended to be used by C files implementing kernel
  system calls and driver subsystem APIs.
- syscall_list.h now contains the #defines for system call IDs. This
  list is expected to grow quite large so it is put in its own header.
  This is now an enumerated type instead of defines to make things
  easier as we introduce system calls over the next few months. In the
  fullness of time when we desire to have a fixed userspace/kernel ABI,
  this can always be converted to defines.

Some new code added:

- _SYSCALL_MEMORY() macro added to check memory regions passed up from
  userspace in handler functions
- _syscall_invoke{7...10}() inline functions declare for invoking system
  calls with more than 6 arguments. 10 was chosen as the limit as that
  corresponds to the largest arg list we currently have
  which is for k_thread_create()

Other changes

- auto-generated K_SYSCALL_DECLARE* macros documented
- _k_syscall_table in userspace.c is not a placeholder. There's no
  strong need to generate it and doing so would require the introduction
  of a third build phase.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 08:56:20 -07:00
Andrew Boie
1956f09590 kernel: allow up to 6 arguments for system calls
A quick look at "man syscall" shows that in Linux, all architectures
support at least 6 argument system calls, with a few supporting 7. We
can at least do 6 in Zephyr.

x86 port modified to use EBP register to carry the 6th system call
argument.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-20 09:18:59 -07:00
Andrew Boie
a23c245a9a userspace: flesh out internal syscall interface
* Instead of a common system call entry function, we create a
table mapping system call IDs to handler skeleton functions which are
invoked directly by the architecture code which receives the system
call.

* system call handler prototype specified. All but the most trivial
system calls will implement one of these. They validate all the
arguments, including verifying kernel/device object pointers, ensuring
that the calling thread has appropriate access to any memory buffers
passed in, and performing other parameter checks that the base system
call implementation does not check, or only checks with __ASSERT().

It's only possible to install a system call implementation directly
inside this table if the implementation has a return value and requires
no validation of any of its arguments.

A sample handler implementation for k_mutex_unlock() might look like:

u32_t _syscall_k_mutex_unlock(u32_t mutex_arg, u32_t arg2, u32_t arg3,
                              u32_t arg4, u32_t arg5, void *ssf)
{
        struct k_mutex *mutex = (struct k_mutex *)mutex_arg;
        _SYSCALL_ARG1;

        _SYSCALL_IS_OBJ(mutex, K_OBJ_MUTEX, 0,  ssf);
        _SYSCALL_VERIFY(mutex->lock_count > 0, ssf);
        _SYSCALL_VERIFY(mutex->owner == _current, ssf);

        k_mutex_unlock(mutex);

        return 0;
}

* the x86 port modified to work with the system call table instead of
calling a common handler function. An issue where registers being
changed could confuse the compiler has been fixed; all registers, even
ones used for parameters, must be preserved across the system call.

* a new arch API added for producing a kernel oops when validation of system
call arguments fails. The debug information reported will be from the system
call site and not inside the handler function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-15 13:44:45 -07:00
Andrew Boie
424e993b41 x86: implement userspace APIs
- _arch_user_mode_enter() implemented
- _arch_is_user_context() implemented
- _new_thread() will honor K_USER option if passed in
- System call triggering macros implemented
- _thread_entry_wrapper moved and now looks for the next function to
call in EDI

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Anas Nashif
1e8afbfe5a cleanup: remove lots of references to unified kernel
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-12 12:37:11 -04:00
Andrew Boie
d81f9c1e4d x86: revise _x86_mmu_buffer_validate
- There's no point in building up "validity" (declared volatile for some
  strange reason), just exit with false return value if any of the page
  directory or page table checks don't come out as expected

- The function was returning the opposite value as its documentation
  (0 on success, -EPERM on failure). Documentation updated.

- This function will only be used to verify buffers from user-space.
  There's no need for a flags parameter, the only option that needs to
  be passed in is whether the buffer has write permissions or not.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:41 -07:00
Andrew Boie
3bb677d6eb x86: don't set FS/GS segment selectors
We shouldn't be imposing any policy here, we do not yet use these in
Zephyr. Zero these at boot and otherwise leave alone.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:08 -07:00
Andrew Boie
1e06ffc815 zephyr: use k_thread_entry_t everywhere
In various places, a private _thread_entry_t, or the full prototype
were being used. Be consistent and use the same typedef everywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 11:18:22 -07:00
Anas Nashif
939889a202 kconfig: remove unused config DEBUG_IRQS
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-11 09:42:35 -07:00
Anas Nashif
261f898e8f kconfig: remove extra menu for x86 core options
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-11 09:42:35 -07:00
Adithya Baglody
609ade891b x86: MMU: Updated MMU code to use the new macros.
Use of X86_MMU_GET_PTE to increase readability of the MMU code.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-09-07 17:13:06 -07:00
Andrew Boie
a34f4fb94f x86: add printk for protection faults
Most x86 exceptions that don't already have their own handlers
are fairly rare, but with the introduction of userspace
people will be seeing General Protection Faults much more
often. Report it as text so that users unfamiliar with x86
internals will know what is happening.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:35:27 -07:00
Andrew Boie
8eaff5d6d2 k_thread_abort(): assert if abort essential thread
Previously, this was only done if an essential thread self-exited,
and was a runtime check that generated a kernel panic.

Now if any thread has k_thread_abort() called on it, and that thread
is essential to the system operation, this check is made. It is now
an assertion.

_NANO_ERR_INVALID_TASK_EXIT checks and printouts removed since this
is now an assertion.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:35:16 -07:00
Andrew Boie
8eeb09765b x86: cleanup _new_thread()
Years of iterative development had made this function more complicated
than it needed to be. Fixed some errors in the documentation as well.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-07 16:33:50 -07:00
Youvedeep Singh
76b577e180 tests: benchmark: timing_info: Change API/variable Name.
The API/variable names in timing_info look very specific to a
platform (like systick etc.), whereas these variables are used
across platforms (nrf/arm/quark).
So this patch:
1. Changes the API/variable names to generic ones.
2. Creates some macros whose implementation is platform
dependent.

Jira: ZEP-2314

Signed-off-by: Youvedeep Singh <youvedeep.singh@intel.com>
2017-08-31 14:25:31 -04:00
Adithya Baglody
ab7b02ce67 x86: MMU: Bug in _x86_mmu_buffer_validate
The value of the PTE (starting_pte_num) was not
calculated correctly. If the size of the buffer exceeded 4KB,
the buffer validation API was failing.

JIRA: ZEP-2489

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-08-09 07:06:22 -07:00
Andrew Boie
80e82e7205 x86: stack overflow improvements
As luck would have it, the TSS for the main IA task has
all the information we need, so we populate an exception stack
frame with it.

The double-fault handler just stashes data and makes the main
hardware thread runnable again, and processing of the
exception continues from there.

We check the first byte before the faulting ESP value to see
if the stack pointer had run up to a non-present page, a sign
that this is a stack overflow and not a double fault for
some other reason.

Stack overflows in kernel mode are now recoverable for non-
essential threads, with the caveat that we hope we weren't in
a critical section updating kernel data structures when it
happened.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-03 11:46:26 -04:00
Andrew Boie
25a8aef275 x86: enable MMU for application memory
Configuring the RAM/ROM regions will be the same for all
x86 targets as this is done with linker symbols.

Peripheral configuration left at the SOC level.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-03 11:46:26 -04:00
Andrew Boie
9ffaaae5ad x86: additional debug output for page faults
Page faults will additionally dump out some interesting
page directory and page table flags for the faulting
memory address.

Intended to help determine whether the page tables have been
configured incorrectly as we enable memory protection features.

This only happens if CONFIG_EXCEPTION_DEBUG is turned on.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-03 11:46:26 -04:00
Andrew Boie
507852a4ad kernel: introduce opaque data type for stacks
Historically, stacks were just character buffers and could be treated
as such if the user wanted to look inside the stack data, and also
declared as an array of the desired stack size.

This is no longer the case. Certain architectures will create a memory
region much larger to account for MPU/MMU guard pages. Unfortunately,
the kernel interfaces treat both the declared stack, and the valid
stack buffer within it as the same char * data type, even though these
absolutely cannot be used interchangeably.

We introduce an opaque k_thread_stack_t which gets instantiated by
K_THREAD_STACK_DECLARE(), this is no longer treated by the compiler
as a character pointer, even though it really is.

To access the real stack buffer within, the result of
K_THREAD_STACK_BUFFER() can be used, which will return a char * type.

This should catch a bunch of programming mistakes at build time:

- Declaring a character array outside of K_THREAD_STACK_DECLARE() and
  passing it to k_thread_create()
- Directly examining the stack created by K_THREAD_STACK_DECLARE()
  which is not actually the memory desired and may trigger a CPU
  exception

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-08-01 16:43:15 -07:00
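
A short illustration of the distinction, using K_THREAD_STACK_DEFINE()
(the defining counterpart of the DECLARE macro mentioned above); the
size is arbitrary and the helper name is made up:

#include <kernel.h>

K_THREAD_STACK_DEFINE(my_stack, 512);

size_t my_stack_usable_size(void)
{
        /* Valid: go through the accessor macros... */
        char *buf = K_THREAD_STACK_BUFFER(my_stack);

        ARG_UNUSED(buf);

        /* ...rather than sizeof(my_stack) or peeking at the raw symbol,
         * which may also cover guard pages on MPU/MMU targets.
         */
        return K_THREAD_STACK_SIZEOF(my_stack);
}
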
Andrew Boie
d944950aaa x86: install guard page for interrupt stack
We need to know when the interrupt stack overflows as well as
thread stacks.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-26 05:57:45 -04:00
Andrew Boie
054d47b29c x86: set stack guard page non-writable
This will trigger a page fault if the guard area
is written to. Since the exception itself will try
to write to the memory, a double fault will be triggered
and we will do an IA task switch to the df_tss and panic.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Andrew Boie
0fab8a6dc5 x86: page-aligned stacks with guard page
Subsequent patches will set this guard page as unmapped,
triggering a page fault on access. If this is due to
stack overflow, a double fault will be triggered,
which we are now capable of handling with a switch to
a known good stack.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Andrew Boie
6101aa6220 x86: add API for modifying page tables
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Andrew Boie
bc666ae7f7 x86: implement improved double-fault handler
We now create a special IA hardware task for handling
double faults. This has a known good stack so that if
the kernel tries to push stack data onto an unmapped page,
we don't triple-fault and reset the system.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Andrew Boie
08c291306e x86: generate RAM-based GDT dynamically
We will need this for stack memory protection scenarios
where a writable GDT with Task State Segment descriptors
will be used. The addresses of the TSS segments cannot be
put in the GDT via preprocessor magic due to architecture
requirements that the address be split up into different
fields in the segment descriptor.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-25 11:32:36 -04:00
Andrew Boie
a717050140 qemu_x86: terminate emulator on fatal system error
This will cause sanitycheck runs to finish more quickly
instead of sitting there waiting on a timeout. We already
do this with the Xtensa simulator.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-07-22 09:46:26 -04:00
Adithya Baglody
079b17b312 x86: MMU: Validate user Buffer
A user space buffer must be validated before the required operation
can proceed. This API will check the current MMU
configuration to determine if the buffer held by the user is valid.

Jira: ZEP-2326

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-07-19 08:06:44 -07:00
Andrew Boie
3d8aaf7099 x86: implement bss zero and data copy for application
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-29 07:46:58 -04:00
Savinay Dharmappa
ce1add260b dts: x86: Add dts support for x86
This patch adds the necessary files and modifies the existing
files to add device support for the x86-based Intel Quark microcontroller.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
2017-06-22 10:23:39 -05:00
Anas Nashif
397d29db42 linker: move all linker headers to include/linker
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-06-18 09:24:04 -05:00
Adithya Baglody
be1cb961ad tests: benchmark: boot_time: Reading time stamps made arch agnostic
1. Changed _tsc_read() to k_cycle_get_32(). Thus reading the
time stamp will be agnostic of the architecture used.
2. Changed the variable names from *_tsc to *_time_stamp.

JIRA: ZEP-1426

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-06-16 07:37:37 -05:00
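
An architecture-agnostic measurement then reduces to something like the
following sketch (the helper name is made up):

#include <kernel.h>

u32_t measure_cycles(void (*fn)(void))
{
        u32_t start_time_stamp = k_cycle_get_32();

        fn();

        /* Unsigned subtraction copes with a single wrap of the counter */
        return k_cycle_get_32() - start_time_stamp;
}
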
Adithya Baglody
43dfd98469 kernel: x86: MMU: Enable MMU at boot time.
In crt0.S the MMU is initialized. It uses the statically built
page tables. The 32-bit paging scheme is used here, so each page
table entry maps a 4KB page. The valid regions of memory are
specified by the SoC-specific file (soc.c).

JIRA: ZEP-2099

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-06-13 11:36:54 -04:00
Adithya Baglody
9bbf5335b9 kernel: x86: MMU: Macros & Linker scripts for Boot time table creation
A macro is used to create a structure that specifies the boot-time
page table configuration. It is needed by the gen_mmu.py script to
generate the actual page tables.

The linker script is needed for the following:
     1. To place the MMU page tables at a 4 KByte boundary.
     2. To keep the configuration structure created by
        the macro (mentioned above).

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-06-13 11:36:54 -04:00
Andrew Boie
e3550a29ff stack_sentinel: hang system on failure
Stack sentinel doesn't prevent corruption, it just notices when
it happens. Any memory could be in a bad state and it's more
appropriate to take the entire system down rather than just kill
the thread.

Fatal testcase will still work since it installs its own
_SysFatalErrorHandler.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-08 13:49:36 -05:00
Andrew Boie
998f905445 arches: declare _SysFatalErrorHandler __weak
This function is intended to be easily overridable by applications.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-08 13:49:36 -05:00
Andrew Boie
ae1a75b82e stack_sentinel: change cooperative check
One of the stack sentinel policies was to check the sentinel
any time a cooperative context switch is done (i.e, _Swap is
called).

This was done by adding a hook to _check_stack_sentinel in
every arch's __swap function.

This way is cleaner as we just have the hook in one inline
function rather than implemented in several different assembly
dialects.

The check upon interrupt is now made unconditionally rather
than checking if we are calling __swap, since the check now
is only called on cooperative _Swap(). The interrupt is always
serviced first.

Issue: ZEP-2244
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-08 13:49:36 -05:00
Andrew Boie
3989de7e3b kernel: fix short time-slice reset
The kernel tracks time slice usage with the _time_slice_elapsed global.
Every time the timer interrupt goes off and the timer driver calls
_nano_sys_clock_tick_announce() with the elapsed time, this is added to
_time_slice_elapsed. If it exceeds the total time slice, the thread is
moved to the back of the queue for that priority level and
_time_slice_elapsed is reset to zero.

In a non-tickless kernel, this is the only time _time_slice_elapsed is
reset.  If a thread uses up a partial time slice, and then cooperatively
switches to another thread, the next thread will inherit the remaining
time slice, causing it not to be able to run as long as it ought to.

There does exist code to properly reset the elapsed count, but it was
only compiled in a tickless kernel. Now it is built any time
CONFIG_TIMESLICING is enabled.

Issue: ZEP-2107
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-06-02 14:47:01 -04:00
Andrew Boie
5dcb279df8 debug: add stack sentinel feature
This places a sentinel value at the lowest 4 bytes of a stack
memory region and checks it at various intervals, including when
servicing interrupts or context switching.

This is implemented on all arches except ARC, which supports stack
bounds checking directly in hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 15:14:41 -04:00
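
Conceptually the mechanism amounts to the following sketch; the sentinel
value, helper names, and the fatal-error path are assumptions (the real
code reports a dedicated fatal error rather than calling k_panic()
directly):

#include <kernel.h>

#define STACK_SENTINEL 0xF0F0F0F0U  /* any unlikely bit pattern works */

/* Written once when the thread's stack is set up */
static inline void set_stack_sentinel(char *stack_start)
{
        *(u32_t *)stack_start = STACK_SENTINEL;
}

/* Checked on interrupt exit and on cooperative context switch; a
 * mismatch means the thread overflowed its stack at some point.
 */
static inline void check_stack_sentinel(char *stack_start)
{
        if (*(u32_t *)stack_start != STACK_SENTINEL) {
                k_panic();
        }
}
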
Andrew Boie
3d3d6a85df x86: remove hacks to include functions
None of this is currently necessary, the spurious interrupt
stubs and exception entry code is included in the binary just
fine. To make matters worse, some data referenced lives in the
.intList section which is completely stripped out of the binary.

If in the future we find certain essential functions are being
garbage collected when they should not be, the proper way to
mitigate this is with KEEP() directives in the linker script.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-13 14:07:09 -04:00
Andrew Boie
d26cf2dc33 kernel: add k_thread_create() API
Unlike k_thread_spawn(), the struct k_thread can live anywhere and not
in the thread's stack region. This will be useful for memory protection
scenarios where private kernel structures for a thread are not
accessible by that thread, or we want to allow the thread to use all the
stack space we gave it.

This requires a change to the internal _new_thread() API as we need to
provide a separate pointer for the k_thread.

By default, we still create internal threads with the k_thread in stack
memory. Forthcoming patches will change this, but we first need to make
it easier to define k_thread memory of variable size depending on
whether we need to store coprocessor state or not.

Change-Id: I533bbcf317833ba67a771b356b6bbc6596bf60f5
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-05-11 20:24:22 -04:00
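
Usage then looks roughly like this, with the struct k_thread placed
wherever the caller wants rather than carved out of the stack object
(names, stack size, and priority are arbitrary):

#include <kernel.h>

K_THREAD_STACK_DEFINE(worker_stack, 1024);
static struct k_thread worker_thread;   /* lives outside the stack region */

static void worker(void *p1, void *p2, void *p3)
{
        ARG_UNUSED(p1);
        ARG_UNUSED(p2);
        ARG_UNUSED(p3);
        /* thread body */
}

void worker_start(void)
{
        k_thread_create(&worker_thread, worker_stack,
                        K_THREAD_STACK_SIZEOF(worker_stack),
                        worker, NULL, NULL, NULL,
                        K_PRIO_PREEMPT(7), 0, K_NO_WAIT);
}
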
Adithya Baglody
d03b2496cd test: benchmarking: Timing metrics for the kernel
JIRA: ZEP-1822, ZEP-1823, ZEP-1825

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-05-03 08:46:30 -04:00
Ramesh Thomas
89ffd44dfb kernel: tickless: Add tickless kernel support
Adds event based scheduling logic to the kernel. Updates
management of timeouts, timers, idling etc. based on
time tracked at events rather than periodic ticks. Provides
interfaces for timers to announce and get next timer expiry
based on kernel scheduling decisions involving time slicing
of threads, timeouts and idling. Uses wall time units instead
of ticks in all scheduling activities.

The implementation involves changes in the following areas

1. Management of time in wall units like ms/us instead of ticks
The existing implementation already had an option to configure
the number of ticks in a second. The new implementation builds on
top of that feature and provides an option to set the size of the
scheduling granularity to milliseconds or microseconds. This
allows most of the current implementation to be reused. Due to
this re-use and co-existence with the tick-based kernel, the names
of variables may contain the word "tick". However, in the
tickless kernel implementation, it represents the currently
configured time unit, which would be milliseconds or
microseconds. The APIs that take time as a parameter are not
impacted and they continue to pass time in milliseconds.

2. Timers would not be programmed in periodic mode
generating ticks. Instead they would be programmed in one
shot mode to generate events at the time the kernel scheduler
needs to gain control for its scheduling activities like
timers, timeouts, time slicing, idling etc.

3. The scheduler provides interfaces that the timer drivers
use to announce elapsed time and get the next time the scheduler
needs a timer event. It is possible that the scheduler may not
need another timer event, in which case the system would wait
for a non-timer event to wake it up if it is idling.

4. New APIs are defined to be implemented by timer drivers. Also,
they need to handle timer events differently. These changes
have been done in the HPET timer driver. In future, other timers
that support the tickless kernel should implement these APIs as well.
These APIs are to re-program the timer, update and announce
elapsed time.

5. Philosopher and timer_api applications have been enabled to
test tickless kernel. Separate configuration files are created
which define the necessary CONFIG flags. Run these apps using
following command
make pristine && make BOARD=qemu_x86 CONF_FILE=prj_tickless.conf qemu

Jira: ZEP-339 ZEP-1946 ZEP-948
Change-Id: I7d950c31bf1ff929a9066fad42c2f0559a2e5983
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:28 +00:00
Ramesh Thomas
62eea121b3 kernel: tickless: Rename _Swap to allow creation of macro
Future tickless kernel patches will be inserting some
code before the call to Swap. To enable this, a macro
named like the current _Swap is created, which first calls
the tickless kernel code and then calls the real __swap().

Jira: ZEP-339
Change-Id: Id778bfcee4f88982c958fcf22d7f04deb4bd572f
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2017-04-27 13:46:26 +00:00
Andrew Boie
7827b7bf4a x86: exception-assisted panic/oops support
We reserve a specific vector in the IDT to trigger when we want to
enter a fatal exception state from software.

Disabled for drivers/build_all tests as we were up to the ROM limit
on Quark D2000.

Issue: ZEP-843
Change-Id: I4de7f025fba0691d07bcc3b3f0925973834496a0
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-22 10:31:49 -04:00
Andrew Boie
cdb94d6425 kernel: add k_panic() and k_oops() APIs
Unlike assertions, these APIs are active at all times. The kernel will
treat these errors in the same way as fatal CPU exceptions. Ultimately,
the policy of what to do with these errors is implemented in
_SysFatalErrorHandler.

If the architecture supports it, a real CPU exception can be triggered
which will provide a complete register dump and PC value when the
problem occurs. This will provide more helpful information than a fake
exception stack frame (_default_esf) passed to the arch-specific exception
handling code.

Issue: ZEP-843
Change-Id: I8f136905c05bb84772e1c5ed53b8e920d24eb6fd
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-22 10:31:49 -04:00
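
Usage is deliberately trivial; for instance (the surrounding structure
and magic value are hypothetical):

#include <kernel.h>

#define CFG_MAGIC 0xC0FFEEU       /* hypothetical */

struct app_config {               /* hypothetical */
        u32_t magic;
};

void config_check(const struct app_config *cfg)
{
        if (cfg == NULL) {
                /* Unrecoverable misuse by this thread: by default the
                 * fatal error handler aborts the offending thread.
                 */
                k_oops();
        }

        if (cfg->magic != CFG_MAGIC) {
                /* The whole system state is suspect */
                k_panic();
        }
}
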
Kumar Gala
96ee45df8d kernel: refactor thread_monitor_init into common code
We do the same thing on all arches right now for thread_monitor_init, so
let's put it in a common place.  This also should fix an issue on xtensa
when thread monitor can be enabled (reference to _nanokernel.threads).

Change-Id: If2f26c1578aa1f18565a530de4880ae7bd5a0da2
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala
b8823c4efd kernel: Refactor common _new_thread init code
We do a bit of the same stuff on all the arches to set up a new thread,
so let's put that code in a common place to unify it for everyone and
reduce some duplicated code.

Change-Id: Ic04121bfd6846aece16aa7ffd4382bdcdb6136e3
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:42 +00:00
Kumar Gala
5742a508a2 kernel: cleanup use of naked unsigned in _new_thread
There are a few places where we used a naked unsigned type; let's be
explicit and make it 'unsigned int'.

Change-Id: I33fcbdec4a6a1c0b1a2defb9a5844d282d02d80e
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 20:34:41 +00:00
Kumar Gala
cc334c7273 Convert remaining code to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  This handles the remaining includes and kernel, plus
touching up various points that we skipped because of include
dependencies.  We also convert the PRI printf formatters in the arch
code over to normal formatters.

Jira: ZEP-2051

Change-Id: Iecbb12601a3ee4ea936fd7ddea37788a645b08b0
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 11:38:23 -05:00
Kumar Gala
bf53ebf2c8 arch: convert to using newly introduced integer sized types
Convert code to use u{8,16,32,64}_t and s{8,16,32,64}_t instead of C99
integer types.  There are a few places we don't convert over to the new
types because of compatibility with ext/HALs or for ease of transition
at this point.  Fix up a few of the PRI formatters so we build with newlib.

Jira: ZEP-2051

Change-Id: I7d2d3697cad04f20aaa8f6e77228f502cd9c8286
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-04-21 12:08:12 +00:00
Anas Nashif
8df439b40b kernel: rename nanoArchInit->kernel_arch_init
Change-Id: I094665e583f506cc71185cb6b8630046b2d4b2f8
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-19 10:59:35 -05:00
Anas Nashif
d7bc60f096 kernel: remove remaining microkernel references
Change-Id: Ie648dbaaf714316c21395bd43e555618013dbd19
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-04-10 20:21:05 +00:00
Andrew Boie
bf902e6b95 x86: add a more informative page fault handler
This is only built in if CONFIG_EXCEPTION_DEBUG is turned on.

Change-Id: I91f0601e344919f3481f7f5e78cb98c6784d1ec8
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-04-06 21:18:28 +00:00
Andrew Boie
e01dd87377 x86: implement direct interrupts
Change-Id: Icac461a361dde969f023e7aa11f0605a97a3c009
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-02-03 18:18:30 +00:00
Benjamin Walsh
ee659ae1a1 build: add _ASMLANGUAGE to all asm files
This avoids asm files from having to explicitly define the _ASMLANGUAGE
symbol themselves.

Change-Id: I71f5a169f75d7443a58a0365a41c55b20dae3029
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:51 +00:00
Benjamin Walsh
ed240f2796 kernel/arch: streamline thread user options
The K_<thread option> flags/options available to users were hidden in
the kernel private header files: move them to include/kernel.h to
publicize them.

Also, to avoid any future confusion, rename the k_thread.execution_flags
field to user_options.

Change-Id: I65a6fd5e9e78d4ccf783f3304b607a1e6956aeac
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh
4b65502448 kernel/x86: move INT_ACTIVE/EXC_ACTIVE to thread_state
They are internal states, not user-facing.

Also prepend an underscore since they are kernel internal symbols.

Change-Id: I53740e0d04a796ba1ccc409b5809438cdb189332
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:50 +00:00
Benjamin Walsh
a8978aba8f kernel: rename thread states symbols
They are not part of the API, so rename from K_<state> to
_THREAD_<state>.

Change-Id: Iaebb7d3083b80b9769bee5616e0f96ed2abc5c56
Signed-off-by: Benjamin Walsh <walsh.benj@gmail.com>
2017-01-24 13:34:49 +00:00
David B. Kinder
ac74d8b652 license: Replace Apache boilerplate with SPDX tag
Replace the existing Apache 2.0 boilerplate header with an SPDX tag
throughout the zephyr code tree. This patch was generated via a
script run over the master branch.

Also updated doc/porting/application.rst that had a dependency on
line numbers in a literal include.

Manually updated subsys/logging/sys_log.c that had a malformed
header in the original file.  Also cleanup several cases that already
had a SPDX tag and we either got a duplicate or missed updating.

Jira: ZEP-1457

Change-Id: I6131a1d4ee0e58f5b938300c2d2fc77d2e69572c
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2017-01-19 03:50:58 +00:00
Benjamin Walsh
168695c7ef kernel/arch: inspect prio/sched_locked together for preemptibility
These two fields in the thread structure control the preemptibility of a
thread.

sched_locked is decremented when the scheduler gets locked, which means
that the scheduler is locked for values 0xff to 0x01, since it can be
locked recursively. A thread is coop if its priority is negative, thus
if the prio field value is 0x80 to 0xff when looked at as an unsigned
value.

By putting them end-to-end, this means that a thread is non-preemptible
if the bundled value is greater than or equal to 0x0080. This is the
only thing the interrupt exit code has to check to decide to try a
reschedule or not.

Change-Id: I902d36c14859d0d7a951a6aa1bea164613821aca
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:25 +00:00
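
A sketch of that bundling, with the little-endian layout shown (the real
structure also handles big-endian ordering and uses kernel-internal
names):

struct thread_base_sketch {
        union {
                struct {
                        s8_t prio;          /* negative: cooperative thread */
                        u8_t sched_locked;  /* 0: unlocked; decremented on lock */
                };
                u16_t preempt;              /* both fields read in one go */
        };
};

/* Preemptible only if prio >= 0 (low byte < 0x80) and the scheduler is
 * not locked (high byte == 0), i.e. the bundled value is below 0x0080.
 */
#define IS_PREEMPTIBLE(base) ((base)->preempt < 0x0080)
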
Benjamin Walsh
f955476559 kernel/arch: optimize memory use of some thread fields
Some thread fields were 32-bit wide, when they are not even close to
using that full range of values. They are instead changed to 8-bit fields.

- prio can fit in one byte, limiting the priorities range to -128 to 127

- recursive scheduler locking can be limited to 255; a rollover results
  most probably from a logic error

- flags are split into execution flags and thread states; 8 bits is
  enough for each of them currently, with at worst two states and four
  flags to spare (on x86, on other archs, there are six flags to spare)

Doing this saves 8 bytes per stack. It also sets up an incoming
enhancement when checking if the current thread is preemptible on
interrupt exit.

Change-Id: Ieb5321a5b99f99173b0605dd4a193c3bc7ddabf4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-09 20:52:24 +00:00
Benjamin Walsh
e6a69cae54 kernel/arch: reverse polarity on sched_locked
This will allow for an enhancement when checking if the thread is
preemptible when exiting an interrupt.

Change-Id: If93ccd1916eacb5e02a4d15b259fb74f9800d6f4
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2017-01-06 17:32:24 +00:00
Anas Nashif
fad7e2dd8d logging: move event_logger to subsys/logging
Jira: ZEP-1337
Change-Id: If1690e19a882cf53caaa3418ccabeb49c783f63d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-25 14:34:43 -05:00
Anas Nashif
c1347b4730 kernel: replace all remaining nanokernel occurances
replace include <nanokernel.h> with <kernel.h> everywhere and also fix
any remaining mentions of nanokernel.

Keep the legacy samples/tests as is.

Change-Id: Iac48447bd191e83f21a719c69dc26233216d08dc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-25 14:34:43 -05:00
Benjamin Walsh
bfa5653e9a arch: remove instances of fiberRtnValueSet()
Obsolete, replaced by _set_thread_return_value().

Change-Id: I23e9cfc07e43542f0965817edc3552d456fd2ef3
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-21 19:50:08 +00:00
Anas Nashif
87133d5def debug: gdb: move to new kernel APIs
Change-Id: Ifed1fe7c60fa150ee3ef4fefabafeb95312bf8bc
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif
d687a95611 kernel: move kernel code to kernel/ directly
Also remove mentions of unified kernel in various places in the kernel,
samples and documentation.

Change-Id: Ice43bc73badbe7e14bae40fd6f2a302f6528a77d
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 14:59:35 -05:00
Anas Nashif
0d775bcd9c x86: remove obsolete comment about tasks/fibers
Change-Id: Iff911329f5c981d0d47880924e8a4d52478423fd
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:42 +00:00
Anas Nashif
cb888e6805 kernel: remove nano/micro wording and usage
Also remove some old cflags referencing directories that do not exist
anymore.
Also replace references to legacy APIs in doxygen documentation of
various functions.

Change-Id: I8fce3d1fe0f4defc44e6eb0ae09a4863e33a39db
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-12-19 19:58:03 +00:00
Benjamin Walsh
48db0b3443 arch/all: simpler _SysFatalErrorHandler()
- does not pull in printk(), for potential footprint gain
- does not pull in k_thread_abort(), for single-threaded systems

Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
Change-Id: Ibc6a198b81a6cd73117d1e85aa05b92a4501a34d
2016-12-15 16:17:39 -05:00
Benjamin Walsh
8e4a534ea1 kernel: enable and optimize coop-only configurations
Some kernel operations, like scheduler locking, can be optimized out,
since coop threads lock the scheduler by their very nature. Also, the
interrupt exit path for all architectures does not have to do any
rescheduling, again by the nature of non-preemptible threads.

Change-Id: I270e926df3ce46e11d77270330f2f4b463971763
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
c3a2bbba16 kernel: add k_cpu_idle/k_cpu_atomic_idle()
nano_cpu_idle/nano_cpu_atomic_idle were not ported to the unified
kernel, and only the old APIs were available. There was no real impact
since, in the unified kernel, only the idle thread should really be
doing power management. However, with a single-threaded kernel, these
functions can be useful again.

The kernel internals now make use of these APIs instead of the legacy
ones.

Change-Id: Ie8a6396ba378d3ddda27b8dd32fa4711bf53eb36
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 16:17:38 -05:00
Benjamin Walsh
88b3691415 kernel/arch: enhance the "ready thread" cache
The way the ready thread cache was implemented caused it to not always
be "hot", i.e. there could be some misses, which happened when the
cached thread was taken out of the ready queue. When that happened, it
was not replaced immediately, since doing so could mean that the
replacement might not run because the flow could be interrupted and
another thread could take its place. This was the more conservative
approach that ensured that moving a thread to the cache would never be
wasted.

However, this caused two problems:

1. The cache could not be refilled until another thread context-switched
in, since there was no thread in the cache to compare priorities
against.

2. Interrupt exit code would always have to call into C to find what
thread to run when the current thread was not coop and did not have the
scheduler locked. Furthermore, it was possible for this code path to
encounter a cold cache and then it had to find out what thread to run
the long way.

To fix this, filling the cache is now more aggressive, i.e. the next
thread to put in the cache is found even when the current cached
thread is context-switched out. This ensures the interrupt exit code is
much faster on the slow path. In addition, since finding the next thread
to run is now always "get it from the cache", which is a simple fetch
from memory (_kernel.ready_q.cache), there is no need to call the more
complex C code.

On the ARM FRDM K64F board, this improvement is seen:

Before:

1- Measure time to switch from ISR back to interrupted task

   switching time is 215 tcs = 1791 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 315 tcs = 2625 nsec

After:

1- Measure time to switch from ISR back to interrupted task

   switching time is 130 tcs = 1083 nsec

2- Measure time from ISR to executing a different task (rescheduled)

   switch time is 225 tcs = 1875 nsec

These are the most dramatic improvements, but most of the numbers
generated by the latency_measure test are improved.

Fixes ZEP-1401.

Change-Id: I2eaac147048b1ec71a93bd0a285e743a39533973
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-12-15 15:50:02 -05:00
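
A rough sketch of the "always hot" cache described above; the structure and
helper names are illustrative, not the actual scheduler internals:

    struct k_thread;                        /* opaque for this sketch */

    struct ready_q_sketch {
        struct k_thread *cache;             /* best ready thread, always valid */
        /* ... per-priority ready lists would live here ... */
    };

    /* Hypothetical helper that scans the ready lists for the best thread. */
    struct k_thread *find_best_ready_thread(struct ready_q_sketch *rq);

    /* Refill the cache on every ready-queue change, including when the
     * cached thread itself is switched out, so the cache never goes cold.
     */
    static inline void refill_cache(struct ready_q_sketch *rq)
    {
        rq->cache = find_best_ready_thread(rq);
    }

    /* Interrupt-exit fast path: picking the next thread is a single load
     * (the equivalent of reading _kernel.ready_q.cache), not a C call.
     */
    static inline struct k_thread *next_thread_to_run(struct ready_q_sketch *rq)
    {
        return rq->cache;
    }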
Andrew Boie
452fd7a5c2 x86: don't set segment registers if we don't set GDT
We have no idea what's in the GDT if we don't set it ourselves.

Change-Id: I3c2e406370e3ea149252c423d66c97aab95bee17
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-29 14:51:57 -08:00
Benjamin Walsh
b2974a666d kernel/arch: move common thread.flags definitions to common file
Also remove NO_METRIC, which is not referenced anywhere anymore.

Change-Id: Ieaedf075af070a13aa3d975fee9b6b332203bfec
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-26 14:04:18 +00:00
Benjamin Walsh
069fd3624e kernel: streamline initialization of _thread_base and timeouts
Move _thread_base initialization to _init_thread_base(), remove mention
of "nano" in timeouts init and move timeout init to _init_thread_base().
Initialize all base fields via _init_thread_base() in the semaphore groups
code.

Change-Id: I05b70b06261f4776bda6d67f358190428d4a954a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:27:42 +00:00
Benjamin Walsh
8fcc7f69da kernel/arch: remove unused uk_task_ptr parameter from _new_thread()
Artifact from microkernel, for handling multiple pending tasks on
nanokernel objects.

Change-Id: I3c2959ea2b87f568736384e6534ce8e275f1098f
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-23 00:23:57 +00:00
Andrew Boie
2ffa516d89 x86: set accessed bit in ROM-based GDT
Previous configuration was backwards. From the Intel manual:

"If the segment descriptors in the GDT or an LDT are placed in ROM,
the processor can enter an indefinite loop if software or the
processor attempts to update (write to) the ROM-based segment
descriptors. To prevent this problem, set the accessed bits
for all segment descriptors placed in a ROM. Also, remove
operating-system or executive code that attempts to modify
segment descriptors located in ROM."

Only by some miracle has this not been causing problems.

Change-Id: I0bb915962a1069876d2486473760112102feae7b
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-19 00:57:04 +00:00
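
The fix amounts to pre-setting the accessed bit (bit 0 of each descriptor's
access byte) so the CPU never needs to write the descriptor back. An
illustrative flat-model table, not the actual Zephyr GDT:

    /* Access bytes with the accessed bit (0x01) already set:
     *   0x9B = present | ring 0 | code segment, execute/read, accessed
     *   0x93 = present | ring 0 | data segment, read/write,  accessed
     */
    static const unsigned long long rom_gdt_sketch[] = {
        0x0000000000000000ULL,    /* mandatory null descriptor */
        0x00CF9B000000FFFFULL,    /* ring-0 code, base 0, 4 GiB limit */
        0x00CF93000000FFFFULL,    /* ring-0 data, base 0, 4 GiB limit */
    };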
Benjamin Walsh
669360d5ec kernel: fix thread prio and stack size types in some APIs
Prio should be an int rather than a fixed-size int32_t, since values are
small integers. This aligns with the prio parameters of the other APIs.

Stack size should be size_t.

Change-Id: Id29751b86c4ad7a7c2a7ffe446c2a96ae83c77bf
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-18 23:08:46 +00:00
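
An illustrative before/after of the kind of prototype change this implies
(hypothetical names, not the actual Zephyr API):

    #include <stddef.h>
    #include <stdint.h>

    /* Before: fixed-width types where they add nothing. */
    void spawn_sketch_old(char *stack, uint32_t stack_size,
                          void (*entry)(void), int32_t prio);

    /* After: stack sizes are naturally size_t, priorities are small ints. */
    void spawn_sketch_new(char *stack, size_t stack_size,
                          void (*entry)(void), int prio);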
Inaky Perez-Gonzalez
11bd718733 fatal error handlers: report which thread croaked
When a thread dies, at least print the pointer to it, so we can debug
better.

Change-Id: Ief6bbc0c221e2d5271c240a4b73df16413aa5e22
Signed-off-by: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
2016-11-17 14:36:50 +00:00
Allan Stephens
c98da84e69 doc: Various corrections to doxygen info for Kernel APIs
Most kernel APIs are now ready for inclusion in the API guide.
The APIs largely follow a standard template to provide users
of the API guide with a consistent look-and-feel.

Change-Id: Ib682c31f912e19f5f6d8545d74c5f675b1741058
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-16 21:43:16 +00:00
Benjamin Walsh
f6ca7de09c kernel/arch: consolidate tTCS and TNANO definitions
There was a lot of duplication between architectures for the definition
of threads and the "nanokernel" guts. These have been consolidated.

Now, a common file kernel/unified/include/kernel_structs.h holds the
common definitions. Architectures provide two files to complement it:
kernel_arch_data.h and kernel_arch_func.h. The first one contains at
least the struct _thread_arch and struct _kernel_arch data structures,
as well as the struct _callee_saved and struct _caller_saved register
layouts. The second file contains anything that depends on what the
common code in kernel_structs.h provides. Those two files are only meant
to be included in kernel_structs.h in very specific locations.

The thread data structure has been separated into three major parts:
common struct _thread_base and struct k_thread, and arch-specific struct
_thread_arch. The first and third ones are included in the second.

The struct s_NANO data structure has been split into two: common struct
_kernel and arch-specific struct _kernel_arch. The latter is included in
the former.

Offsets files have also changed: nano_offsets.h has been renamed
kernel_offsets.h and is still included by the arch-specific offsets.c.
Also, since the thread and kernel data structures are now made of
sub-structures, offsets have to be added to make up the full offset.
Some of these additions have been consolidated in shorter symbols,
available from kernel/unified/include/offsets_short.h, which includes an
arch-specific offsets_arch_short.h. Most of the code include
offsets_short.h now instead of offsets.h.

Change-Id: I084645cb7e6db8db69aeaaf162963fe157045d5a
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-12 07:04:52 -05:00
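
A condensed sketch of the resulting layering; the field names are
placeholders, since the real definitions live in kernel_structs.h and each
arch's kernel_arch_data.h:

    struct _thread_base {          /* common, arch-independent thread state */
        int prio;
        /* flags, timeout, ... */
    };

    struct _thread_arch {          /* from the arch's kernel_arch_data.h */
        unsigned long ctx;         /* callee-saved registers, FP state, ... */
    };

    struct k_thread {
        struct _thread_base base;  /* common base embedded first */
        /* common fields: stack info, entry point, ... */
        struct _thread_arch arch;  /* arch-specific part embedded last */
    };

    struct _kernel_arch {          /* per-arch kernel state */
        int nested;                /* e.g. IRQ nesting count */
    };

    struct _kernel {               /* replaces the old struct s_NANO */
        struct k_thread *current;
        /* ready queue, cached next thread, ... */
        struct _kernel_arch arch;
    };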
Ramesh Thomas
a3dc53f2a6 power_mgmt: Do not notify deep sleep if bootloader does context restore
Some bootloaders have power management support to restore context
upon resume from deep sleep. In such cases, the OS startup code
should not call the deep sleep notification hook. Create Kconfig flags
to configure this option.

Jira: 1257
Change-Id: I9f40c5fa077c2f17dc8e9f11604c3ed17e549ed5
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2016-11-11 20:40:53 +00:00
Ramesh Thomas
c0cd7acf34 power_mgmt: Simplify _sys_soc_resume notification
The _sys_soc_resume hook is overloaded to handle two different
scenarios. It is primarily called to notify exit of kernel idling
after PM operations. It is also used to notify exit from deep sleep.
This is very confusing and also makes the implementation of the
hook function very difficult, because very different conditions are
involved in the two use cases. Further, users may not require
either or both use cases, depending on their custom boot flow and
power state handling. To simplify, create a separate hook for the
purpose of deep sleep exit notification. Use the existing one to
only notify kernel idling exit after PM operations.

Jira: ZEP-1256
Change-Id: I96350199a0fd37f16590c8ee5302a94a3d71b8ba
Signed-off-by: Ramesh Thomas <ramesh.thomas@intel.com>
2016-11-11 20:40:52 +00:00
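
A minimal sketch of the resulting split; the deep-sleep hook name and the
weak defaults are assumptions for illustration:

    /* Notified only on kernel idling exit after a PM operation. */
    void __attribute__((weak)) _sys_soc_resume(void)
    {
    }

    /* Notified only when execution resumes after deep sleep exit. */
    void __attribute__((weak)) _sys_soc_resume_from_deep_sleep(void)
    {
    }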
Anas Nashif
7cac3b9625 arch: arc: arm: sys_thread_self_get -> k_current_get
Change-Id: Iaa01b0d8733d76888524cfd258bacbd9c11142de
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-11-10 18:52:51 +00:00
Allan Stephens
bce8fbb61e kernel: Clean up of x86 floating point code
Updates x86 floating point support to reflect changes that have
been made in recent months.

* Many, many, many cosmetic changes (mostly revisions to comments).

* Elimination of unnecessary function aliases that were needed
  to support the task and fiber versions of certain APIs.

* Elimination of run-time code to enable a thread's "FP regs"
  option bit if the "SSE regs" option bit was set. The kernel
  now recognizes that the thread is using the FPU as long as
  either option bit is set. (If the thread has both option bits
  enabled this is the same as if only the "SSE regs" bit is set.)

Change-Id: Ic12abc54b6fa78921749b546d8debf23e7ad232d
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-09 23:51:30 +00:00
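
A one-line sketch of the "either option bit" rule described above (the macro
is illustrative; K_FP_REGS and K_SSE_REGS are the thread option bits):

    /* A thread counts as an FPU user if either option bit is set; having
     * both set behaves the same as having only K_SSE_REGS set.
     */
    #define THREAD_USES_FPU(options)  (((options) & (K_FP_REGS | K_SSE_REGS)) != 0)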
Andrew Boie
56f561e15e arches: use new kernel APIs
Change-Id: I4b6f5264d5295ebf4278991a1f4e2141bef6602f
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-09 20:49:40 +00:00
Andrew Boie
0b474eef9c kernel: deprecate old init levels
PRIMARY, SECONDARY, NANOKERNEL, MICROKERNEL init levels are now
deprecated.

New init levels introduced: PRE_KERNEL_1, PRE_KERNEL_2, POST_KERNEL
to replace them.

Most existing code has instances of PRIMARY replaced with PRE_KERNEL_1
and SECONDARY with POST_KERNEL, as SECONDARY had a longstanding bug:
the documentation specified that it ran before the kernel started up,
but it actually ran afterwards.

Change-Id: I771bc634e9caf7f17dbf214a270bc9967eed7d32
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-09 17:59:44 +00:00
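
A rough usage sketch of the replacement levels; the init-function signature
shown matches this era and later changed, so treat it as illustrative:

    static int my_driver_init(struct device *dev)
    {
        ARG_UNUSED(dev);
        /* POST_KERNEL: kernel services are up, roughly the old SECONDARY slot */
        return 0;
    }

    SYS_INIT(my_driver_init, POST_KERNEL, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);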
Benjamin Walsh
3cc2ba9f9c kernel: add __ASSERT() for thread priorities
Verify the thread priorities are within the bounds when starting a new
thread and when changing the priority of a thread.

Change-Id: I007b3b249e4b80235b6439cbee44cad2f31973bb
Signed-off-by: Benjamin Walsh <benjamin.walsh@windriver.com>
2016-11-08 20:27:31 -05:00
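
An illustrative form of the check; the real bounds come from the configured
number of cooperative and preemptible priorities:

    __ASSERT(prio >= -CONFIG_NUM_COOP_PRIORITIES &&
             prio <= CONFIG_NUM_PREEMPT_PRIORITIES,
             "invalid thread priority %d", prio);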
Andrew Boie
ee95dd22a4 x86: remove CONFIG_NANOKERNEL references
Change-Id: I8c6ca9189dd09133162816675e33332d6e5a34b3
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2016-11-08 22:02:45 +00:00
Allan Stephens
f48f263665 kernel: Rename USE_FP and USE_SSE symbols
Symbols now use the K_ prefix which is now standard for the
unified kernel. Legacy support for these symbols is retained
to allow existing applications to build successfully.

Change-Id: I3ff12c96f729b535eecc940502892cbaa52526b6
Signed-off-by: Allan Stephens <allan.stephens@windriver.com>
2016-11-07 18:52:31 +00:00
Anas Nashif
12ffc58d4b benchmarks: rename _NanoTscRead -> _tsc_read
Change-Id: Id5687f79ac13136f14a14d250e149436a0173f04
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2016-11-07 15:39:15 +00:00