Commit graph

1036 commits

Author SHA1 Message Date
Andrew Boie
2dd91eca0e kernel: move thread monitor init to common code
The original implementation of CONFIG_THREAD_MONITOR would
try to leverage a thread's initial stack layout to provide
the entry function with arguments for any given thread.

This is problematic:

- Some arches do not have an initial stack layout suitable for
  this
- Some arches never enabled this at all (riscv32, nios2)
- Some arches did not enable this properly
- Dropping to user mode would erase or provide incorrect
information.

Just spend a few extra bytes to store this stuff directly
in the k_thread struct and get rid of all the arch-specific
code for this.
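
Roughly, the idea is to keep the entry information in a small struct
stored in k_thread itself; a minimal sketch (field names are
illustrative, not necessarily the exact ones used):

typedef void (*k_thread_entry_t)(void *p1, void *p2, void *p3);

#ifdef CONFIG_THREAD_MONITOR
/* entry information kept directly in the thread struct, instead of
 * being reconstructed from an arch-specific initial stack layout */
struct __thread_entry {
	k_thread_entry_t pEntry;
	void *parameter1;
	void *parameter2;
	void *parameter3;
};
#endif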

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-06-06 14:26:45 -04:00
Leandro Pereira
edd18c8f5a arch: x86: Better document that CR0.WP will also be set when CR0.PG is
Setting bit CR0.WP (bit 16) will inhibit supervisor threads from
writing to RO pages.  It's a necessary flag to be set, and the constant
name CR0_PAGING_ENABLE didn't reflect the fact that the 16th bit was
being set.
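
For reference, the two bits involved look roughly like this (the
CR0_PG/CR0_WP names are illustrative; only CR0_PAGING_ENABLE comes
from the tree):

#define CR0_PG	(1UL << 31)	/* enable paging */
#define CR0_WP	(1UL << 16)	/* supervisor writes to RO pages fault */

/* value loaded into CR0 at boot sets both bits */
#define CR0_PAGING_ENABLE	(CR0_PG | CR0_WP)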

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-26 19:09:33 -04:00
Andy Ross
3a0cb2d35d kernel: Remove legacy preemption checking
The metairq feature exposed the fact that all of our arch code (and a
few mistaken spots in the scheduler too) was trying to interpret
"preemptible" threads independently.

As of the scheduler rewrite, that logic is entirely within sched.c and
doing it externally is redundant.  And now that "cooperative" threads
can be preempted, it's wrong and produces test failures when used with
metairq threads.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-05-25 09:40:55 -07:00
Leandro Pereira
ecadd465a2 arch: x86: Allow disabling speculative store bypass
In order to mitigate against Spectre V4, add an option that will, at
boot time, verify if the CPU supports the SPEC_CTRL MSR; if so, it'll
attempt to disable the feature.

More information can be found in chapter 4 (Speculative Store Bypass
Mitigation) of the "Speculative Execution Side Channel Mitigations"
document, version 2, published by Intel: https://goo.gl/nocTcj
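
A sketch of the mechanics, assuming the CPUID SSBD feature bit and the
IA32_SPEC_CTRL layout described in that document (the MSR accessor
names follow the ones mentioned elsewhere in this log; the actual
detection code may differ):

#include <cpuid.h>	/* compiler-provided CPUID helper */

#define IA32_SPEC_CTRL_MSR	0x48
#define SPEC_CTRL_SSBD		(1ULL << 2)
#define CPUID7_EDX_SSBD		(1U << 31)

/* set the SSBD bit in IA32_SPEC_CTRL if the CPU advertises support */
static void disable_spec_store_bypass(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) &&
	    (edx & CPUID7_EDX_SSBD)) {
		u64_t ctrl = _x86_msr_read(IA32_SPEC_CTRL_MSR);

		_x86_msr_write(IA32_SPEC_CTRL_MSR, ctrl | SPEC_CTRL_SSBD);
	}
}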

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-24 13:07:12 -04:00
Leandro Pereira
d4221f9a71 arch: x86: Rename MSR-handling functions to conform to convention
Rename _MsrRead() and _MsrWrite() to _x86_msr_read() and
_x86_msr_write() respectively.

Given that these functions are essentially implemented in assembly,
make them static inline.  They can be inlined by the compiler quite
well, most of the time resulting in space savings due to better
handling of the clobbered registers.

Also simplifies the inline assembly, using constraints instead of
moving registers ourselves.  Should shave off a few bytes from code
using these functions.
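
The static inline versions look roughly like the sketch below (the
exact constraints in the tree may differ):

static inline u64_t _x86_msr_read(unsigned int msr)
{
	u64_t ret;

	/* rdmsr reads the MSR selected by ECX into EDX:EAX; the "=A"
	 * constraint maps that register pair onto a 64-bit value */
	__asm__ volatile ("rdmsr" : "=A" (ret) : "c" (msr));

	return ret;
}

static inline void _x86_msr_write(unsigned int msr, u64_t data)
{
	u32_t high = data >> 32;
	u32_t low = data & 0xFFFFFFFFU;

	/* wrmsr writes EDX:EAX into the MSR selected by ECX */
	__asm__ volatile ("wrmsr" : : "c" (msr), "a" (low), "d" (high));
}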

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-05-23 14:38:22 -04:00
Adithya Baglody
5ab3960c75 arch: Cmake: Add __ZEPHYR_SUPERVISOR__ macro for arch files.
Normally a syscall would check the current privilege level and then
decide to go to _impl_<syscall> directly or go through a
_handler_<syscall>.
__ZEPHYR_SUPERVISOR__ is a compiler optimization flag which makes
all the system calls from the arch files link directly to
_impl_<syscall>, thereby removing the overhead of the privilege
check.

In the previous implementation all the source files were compiled
by the zephyr_sources() rule. zephyr_* is a catch-all CMake library
for source files that can be built purely with the include paths,
defines, and other compiler flags that all Zephyr source files use.
As a consequence, adding one extra compiler flag for just one
directory is not possible with it.
This limitation can be overcome by using the zephyr_library* APIs,
which create a library for the required directories and also support
directory-level properties.
Hence we use zephyr_library* to create a new library built with the
__ZEPHYR_SUPERVISOR__ macro for this optimization.
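
Conceptually, each generated call site behaves like the sketch below;
k_example_call, _impl_k_example_call and K_SYSCALL_EXAMPLE are purely
illustrative names:

static inline int k_example_call(int arg)
{
#ifdef __ZEPHYR_SUPERVISOR__
	/* arch sources always run privileged: call the implementation
	 * directly, skipping the runtime privilege check */
	return _impl_k_example_call(arg);
#else
	if (_arch_is_user_context()) {
		return (int)_arch_syscall_invoke1((u32_t)arg,
						  K_SYSCALL_EXAMPLE);
	}

	return _impl_k_example_call(arg);
#endif
}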

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2018-05-15 17:48:18 +03:00
Savinay Dharmappa
e524f0b846 dts: x86: derive RAM and ROM size from dts instead of Kconfig
This patch removes the Kconfig defines for RAM and ROM size on x86.
Instead, these values are derived from the dts.

Signed-off-by: Savinay Dharmappa <savinay.dharmappa@intel.com>
2018-05-14 17:19:23 -04:00
Andrew Boie
42a2c96422 newlib: fix heap user mode access for MPU devices
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.

MPU devices that don't have these restrictions allocate
the heap as normal.

In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.

Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.

On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.
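
A sketch of how an application could consume the API (app_domain and
the partition attribute are illustrative; the exact partition types
and attribute macros depend on the arch):

void *heap_base;
size_t heap_size;

/* query the aligned heap buffer reserved for newlib */
z_newlib_get_heap_bounds(&heap_base, &heap_size);

/* add it to a memory domain so user-mode threads can touch it */
struct k_mem_partition heap_part = {
	.start = (u32_t)heap_base,
	.size = heap_size,
	.attr = K_MEM_PARTITION_P_RW_U_RW,
};

k_mem_domain_add_partition(&app_domain, &heap_part);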

Fixes: #6814

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-05-10 15:09:02 -07:00
Leandro Pereira
16472cafcf arch: x86: Use retpolines in core assembly routines
In order to mitigate Spectre variant 2 (branch target injection), use
retpolines for indirect jumps and calls.

The newly-added hidden CONFIG_X86_NO_SPECTRE flag, which is disabled
by default, must be set by an x86 SoC if its CPU does not perform
speculative execution.  Most targets supported by Zephyr do not, so
this ends up set to "y" on them.

A new setting, CONFIG_RETPOLINE, has been added to the "Security
Options" section, and it will be enabled by default if
CONFIG_X86_NO_SPECTRE is disabled.

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-04-24 04:00:01 +05:30
Anas Nashif
993c350b92 cleanup: replace old jira numbers with GH issues
Replace all references to old JIRA issues (ZEP) with the corresponding
GitHub issue ID.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-03-26 13:13:04 -04:00
Leandro Pereira
2f5659d4d2 arch: x86: Unwind the stack on fatal errors
Prints up to 8 stack frames, with the following format:
  RETURN_ADDR(ARGUMENTS)

Signed-off-by: Leandro Pereira <leandro.pereira@intel.com>
2018-03-16 14:12:15 -07:00
Andrew Boie
e00564d15d x86: fix logic for thread wrappers
If we enable CONFIG_DEBUG_INFO, then we need to fixup the stack
on thread entry so that the EFLAGS value in the EBP slot doesn't
confuse the debugger or any runtime stack unwinding code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-03-16 14:12:15 -07:00
Andrew Boie
c0771ea243 arch: x86: fix integer stub comments
The comment was obsolete; we simply do not allow use of the FPU or
vector math in ISRs. There is no desire to add such support, doing
this is properly offloaded to a worker thread.

Fixes #5283.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-03-06 17:16:03 -05:00
Andy Ross
9c62cc677d kernel: Add kswap.h header to unbreak cycles
The xtensa-asm2 work included a patch that added nano_internal.h
includes in lots of places that needed to have _Swap defined, because
it had to break a cycle and this no longer got pulled in from the arch
headers.

Unfortunately those new includes created new and more amusing cycles
elsewhere which led to breakage on other platforms.

Break out the _Swap definition (only) into a separate header and use
that instead.  Cleaner.  Seems not to have any more hidden gotchas.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2018-02-16 10:44:29 -05:00
Michael Scott
7a9b688526 x86: fix build warning
During compile of lwm2m_client using qemu_x86, the following build
warning was noticed:
zephyr/arch/x86/core/excstub.S:132:2: warning: "/*" within comment [-Wcomment]
  /*

In commit ff42bdd0a0 ("debug: remove option GDB_INFO"), the comment tag
was omitted.  Fix the comment end tag.

Signed-off-by: Michael Scott <michael@opensourcefoundries.com>
2018-02-12 19:21:06 -05:00
Anas Nashif
11a9625eaf debug: remove DEBUG_INFO option
This feature is X86 only and is not used or being tested. It is a
legacy feature and no one can prove it actually works. Remove it until
we have proper documentation, samples, and multi-architecture support.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-02-12 13:58:28 -08:00
Anas Nashif
ff42bdd0a0 debug: remove option GDB_INFO
This feature is X86 only and is not used or being tested. It is a
legacy feature and no one can prove it actually works. Remove it until
we have proper documentation, samples, and multi-architecture support.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-02-12 13:58:28 -08:00
Anas Nashif
b8ea7c889d x86: remove HAS_DTS checking
All X86 boards are now DTS enabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-01-29 10:38:32 -06:00
Anas Nashif
9f6c7838e5 arch: fix typo defafult -> default
Simple typo fix.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-01-08 08:08:45 -05:00
Adithya Baglody
9cde20aefa kernel: mem_domain: Add to current thread should configure immediately.
When the current thread is added to a memory domain, the pages/sections
must be configured immediately.
A problem occurs when we add the current thread to a domain and then
drop down to user mode: without immediate configuration, the memory
domain only becomes active the next time a swap occurs.
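
For example (app_domain and user_entry are illustrative names):

/* the running thread joins the domain and immediately drops to user
 * mode; the domain must be live before the first swap */
k_mem_domain_add_thread(&app_domain, k_current_get());
k_thread_user_mode_enter(user_entry, NULL, NULL, NULL);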

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-12-21 11:52:27 -08:00
Adithya Baglody
13ac4d4264 kernel: mem_domain: Add an arch interface to configure memory domain
Add architecture-specific code for the memory domain
configuration. This is needed to support the memory domain API
k_mem_domain_add_thread.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-12-21 11:52:27 -08:00
Anas Nashif
429c2a4d9d kconfig: fix help syntax and add spaces
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-13 17:43:28 -06:00
Anas Nashif
abbaac9189 cleanup: remove nanokernel/nano leftovers
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-12-05 09:44:23 -06:00
Adithya Baglody
808ad6101e x86: swap: save the scratch pad registers.
Save the required scratch pad register (in this case only edx)
before calling the C function.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-27 11:50:50 -05:00
Sebastian Bøe
0829ddfe9a kbuild: Removed KBuild
Signed-off-by: Sebastian Boe <sebastian.boe@nordicsemi.no>
2017-11-08 20:00:22 -05:00
Sebastian Bøe
12f8f76165 Introduce cmake-based rewrite of KBuild
Introducing CMake is an important step in a larger effort to make
Zephyr easy to use for application developers working on different
platforms with different development environment needs.

Simplified, this change retains Kconfig as-is, and replaces all
Makefiles with CMakeLists.txt. The DSL-like Make language that KBuild
offers is replaced by a set of CMake extentions. These extentions have
either provided simple one-to-one translations of KBuild features or
introduced new concepts that replace KBuild concepts.

This is a breaking change for existing test infrastructure and build
scripts that are maintained out-of-tree. But for FW itself, no porting
should be necessary.

For users that just want to continue their work with minimal
disruption the following should suffice:

Install CMake 3.8.2+

Port any out-of-tree Makefiles to CMake.

Learn the absolute minimum about the new command line interface:

$ cd samples/hello_world
$ mkdir build && cd build
$ cmake -DBOARD=nrf52_pca10040 ..

$ make

PR: zephyrproject-rtos#4692
docs: http://docs.zephyrproject.org/getting_started/getting_started.html

Signed-off-by: Sebastian Boe <sebastian.boe@nordicsemi.no>
2017-11-08 20:00:22 -05:00
Adithya Baglody
538fa7b37c x86: MMU: Configure page tables entries for memory domain in swap.
During swap the required page tables are configured. The outgoing
thread's memory domain pages are reset and the incoming thread's
memory domain is loaded. The pages are configured if userspace
is enabled and if memory domain has been initialized before
calling swap.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Adithya Baglody
f7b0731ce4 x86: MMU: Memory domain implementation for x86
Added support for memory domain implementation.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-11-07 12:22:43 -08:00
Andrew Boie
2a8684f60c x86: de-couple user mode and HW stack protection
This is intended for memory-constrained systems and will save
4K per thread, since we will no longer reserve room for or
activate a kernel stack guard page.

If CONFIG_USERSPACE is enabled, stack overflows will still be
caught in some situations:

1) User mode threads overflowing stack, since it crashes into the
kernel stack page
2) Supervisor mode threads overflowing stack, since the kernel
stack page is marked non-present for non-user threads

Stack overflows will not be caught:

1) When handling a system call
2) When the interrupt stack overflows

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-11-07 09:31:49 -08:00
Gustavo Lima Chaves
97a8716a4f x86: Jailhouse port, tested for UART (# 0, polling) and LOAPIC timer
This is an introductory port for Zephyr to be run as a Jailhouse
hypervisor[1]'s "inmate cell", on x86 64-bit CPUs (running in 32-bit
mode). This was tested with their "tiny-demo" inmate demo cell
configuration, which takes one of the CPUs of the QEMU-VM root cell
config, along with some RAM and serial controller access (it will even
do nice things like reserving some L3 cache for it via Intel CAT) and
Zephyr samples:

   - hello_world
   - philosophers
   - synchronization

The final binary receives an additional boot sequence preamble that
conforms to Jailhouse's expectations (starts at 0x0 in real mode). It
will put the processor in 32-bit protected mode and then proceed to
Zephyr's __start function.

Testing it is just a matter of:
  $ make -C samples/<sample_dir> BOARD=x86_jailhouse JAILHOUSE_QEMU_IMG_FILE=<path_to_image.qcow2> run
  $ sudo insmod <path to jailhouse.ko>
  $ sudo jailhouse enable <path to configs/qemu-x86.cell>
  $ sudo jailhouse cell create <path to configs/tiny-demo.cell>
  $ sudo mount -t 9p -o trans=virtio host /mnt
  $ sudo jailhouse cell load tiny-demo /mnt/zephyr.bin
  $ sudo jailhouse cell start tiny-demo
  $ sudo jailhouse cell destroy tiny-demo
  $ sudo jailhouse disable
  $ sudo rmmod jailhouse

For the hello_world demo case, one should then get QEMU's serial port
output similar to:

"""
Created cell "tiny-demo"
Page pool usage after cell creation: mem 275/1480, remap 65607/131072
Cell "tiny-demo" can be loaded
CPU 3 received SIPI, vector 100
Started cell "tiny-demo"
***** BOOTING ZEPHYR OS v1.9.0 - BUILD: Sep 12 2017 20:03:22 *****
Hello World! x86
"""

Note that Jailhouse's root cell *has to be started in xAPIC
mode* (kernel command line argument 'nox2apic') in order for this to
work. x2APIC support and its reasoning will come in a separate commit.

As a reminder, the make run target introduced for the x86_jailhouse
board involves a root cell image with Jailhouse in it, to be launched
and then partitioned (with >= 2 64-bit CPUs in it).

Inmate cell configs without the JAILHOUSE_CELL_PASSIVE_COMMREG flag
set (e.g. the apic-demo one) would need extra code in Zephyr to deal
with cell shutdown command responses from the hypervisor.

You may want to fine-tune CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC for your
specific CPU; Zephyr performs no detection with regard to that.

Other config differences from pristine QEMU defaults worth of mention
are:

   - there is no HPET when running as Jailhouse guest. We use the LOAPIC
     timer, instead
   - there is no PIC_DISABLE, because there is no 8259A PIC when running
     as a Jailhouse guest
   - XIP also makes no sense when running as a Jailhouse guest, so both
     PHYS_RAM_ADDR/PHYS_LOAD_ADDR are set to zero, which is what the
     tiny-demo cell config expects

This opens up new possibilities for Zephyr, enabling usages beyond
just MCUs. I see special demand coming from functional-safety related
use cases in industry, automotive, etc.

[1] https://github.com/siemens/jailhouse

Reference to Jailhouse's booting preamble code:

Origin: Jailhouse
License: BSD 2-Clause
URL: https://github.com/siemens/jailhouse
commit: 607251b44397666a3cbbf859d784dccf20aba016
Purpose: Dual-licensing of inmate lib code
Maintained-by: Zephyr

Signed-off-by: Gustavo Lima Chaves <gustavo.lima.chaves@intel.com>
2017-11-07 08:58:49 -05:00
Anas Nashif
780324b8ed cleanup: rename fiber/task -> thread
We still have many places talking about tasks and threads, replace those
with thread terminology.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-10-30 18:41:15 -04:00
Andrew Boie
3f508e911e x86: fix CONFIG_DEBUG_INFO build error
This doesn't have any register operands and needs a size suffix.
Fixes: #4480

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-24 12:50:13 -07:00
Adithya Baglody
76edf8a681 x86: MMU: Enable boot time PAE page tables.
With PAE boot page tables, __mmu_tables_start points to the page
directory pointer table (PDPT). Enable PAE by updating the CR4.PAE and
IA32_EFER.NXE bits.
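
Rendered in C for clarity, the two bits are roughly as below; the real
enabling happens in early boot code, and the constant and MSR accessor
names here are illustrative:

#define CR4_PAE		(1UL << 5)	/* CR4.PAE: 64-bit paging entries */
#define IA32_EFER_MSR	0xC0000080
#define EFER_NXE	(1ULL << 11)	/* honor the execute-disable bit */

static inline void enable_pae_nx(void)
{
	unsigned long cr4;

	__asm__ volatile ("movl %%cr4, %0" : "=r" (cr4));
	cr4 |= CR4_PAE;
	__asm__ volatile ("movl %0, %%cr4" : : "r" (cr4));

	_x86_msr_write(IA32_EFER_MSR,
		       _x86_msr_read(IA32_EFER_MSR) | EFER_NXE);
}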

JIRA:ZEP-2511

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-23 10:13:07 -07:00
Adithya Baglody
725de70d86 x86: MMU: Create PAE page structures and unions.
Created structures and unions needed to enable the software to
access these tables.
Also updated the helper macros to ease the usage of the MMU page
tables.
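
A simplified view of one such entry (a 64-bit PAE page table entry,
assuming 32-bit physical addresses; field names are illustrative):

union x86_mmu_pae_pte {
	u64_t value;
	struct {
		u64_t p:1;		/* present */
		u64_t rw:1;		/* writable */
		u64_t us:1;		/* user accessible */
		u64_t pwt:1;		/* write-through */
		u64_t pcd:1;		/* cache disable */
		u64_t a:1;		/* accessed */
		u64_t d:1;		/* dirty */
		u64_t pat:1;
		u64_t g:1;		/* global */
		u64_t ignored1:3;
		u64_t page:20;		/* physical page frame number */
		u64_t ignored2:31;
		u64_t xd:1;		/* execute disable */
	};
};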

JIRA: ZEP-2511

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-23 10:13:07 -07:00
Andrew Boie
d7631ec7e4 Revert "x86: MMU: Memory domain implementation for x86"
This reverts commit d0f6ce2d98.
2017-10-20 15:02:59 -04:00
Andrew Boie
de777adf7b Revert "x86: MMU: Configure page tables entries for memory domain in swap."
This reverts commit a8b9353421.
2017-10-20 15:02:59 -04:00
Adithya Baglody
a8b9353421 x86: MMU: Configure page tables entries for memory domain in swap.
During swap the required page tables are configured. The outgoing
thread's memory domain pages are reset and the incoming thread's
memory domain is loaded. The pages are configured if userspace
is enabled and if memory domain has been initialized before
calling swap.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-20 10:39:51 -07:00
Adithya Baglody
d0f6ce2d98 x86: MMU: Memory domain implementation for x86
Added support for memory domain implementation.

GH-3852

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2017-10-20 10:39:51 -07:00
Andrew Boie
c5c104f91e kernel: fix k_thread_stack_t definition
Currently this is defined as a k_thread_stack_t pointer.
However this isn't correct; stacks are defined as arrays. Extern
references to k_thread_stack_t don't work properly as the compiler
treats it as a pointer to the stack array and not the array itself.

Declaring as an unsized array of k_thread_stack_t doesn't work
well either. The least amount of confusion is to leave out the
pointer/array status completely, use pointers for function prototypes,
and define K_THREAD_STACK_EXTERN() to properly create an extern
reference.

The definitions for all functions and struct that use
k_thread_stack_t need to be updated, but code that uses them should
be unchanged.
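
Usage then looks roughly like this (names are illustrative):

/* in the file that owns the stack */
K_THREAD_STACK_DEFINE(worker_stack, 1024);

/* in a header, for other files that need to reference it */
K_THREAD_STACK_EXTERN(worker_stack);

/* function prototypes keep taking a pointer */
void start_worker(k_thread_stack_t *stack, size_t stack_size);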

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:24:29 -07:00
Andrew Boie
094f2cb77b x86: fix crash in _x86_mmu_get_flags
Looking up the PTE flags was page faulting if the address wasn't
marked as present in the page directory, since there is no page table
for that directory entry.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-17 08:16:14 -07:00
Andrew Boie
73a5fe77f8 x86: fix stack overflow in double fault handler
At very low optimization levels, the call to
K_THREAD_STACK_BUFFER doesn't get inlined, overflowing the
tiny stack.

Replace with _ARCH_THREAD_STACK_BUFFER() which on x86 is
just a macro.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-16 10:53:48 -07:00
Andrew Boie
3e3a237930 x86: fix stack zeroing when dropping to user mode
For 'rep stosl', ECX isn't a size value; it's how many times to repeat
the 4-byte store operation.
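
A sketch of the corrected pattern in inline-asm form (the actual fix
is in the assembly stub; the function name is illustrative):

static inline void zero_words(void *start, u32_t size)
{
	u32_t count = size / 4;	/* ECX: number of 4-byte stores, not bytes */
	void *dest = start;

	__asm__ volatile ("rep stosl"
			  : "+c" (count), "+D" (dest)
			  : "a" (0)
			  : "memory");
}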

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-10-05 18:49:09 -04:00
Andrew Boie
13ca6fe284 syscalls: reorganize headers
- syscall.h now contains those APIs needed to support invoking calls
  from user code. Some stuff moved out of main kernel.h.
- syscall_handler.h now contains directives useful for implementing
  system call handler functions. This header is not pulled in by
  kernel.h and is intended to be used by C files implementing kernel
  system calls and driver subsystem APIs.
- syscall_list.h now contains the #defines for system call IDs. This
  list is expected to grow quite large so it is put in its own header.
  This is now an enumerated type instead of defines to make things
  easier as we introduce system calls over the next few months. In the
  fullness of time when we desire to have a fixed userspace/kernel ABI,
  this can always be converted to defines.

Some new code added:

- _SYSCALL_MEMORY() macro added to check memory regions passed up from
  userspace in handler functions
- _syscall_invoke{7...10}() inline functions declare for invoking system
  calls with more than 6 arguments. 10 was chosen as the limit as that
  corresponds to the largest arg list we currently have
  which is for k_thread_create()

Other changes

- auto-generated K_SYSCALL_DECLARE* macros documented
- _k_syscall_table in userspace.c is not a placeholder. There's no
  strong need to generate it and doing so would require the introduction
  of a third build phase.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-28 08:56:20 -07:00
Andrew Boie
1956f09590 kernel: allow up to 6 arguments for system calls
A quick look at "man syscall" shows that in Linux, all architectures
support at least 6 argument system calls, with a few supporting 7. We
can at least do 6 in Zephyr.

x86 port modified to use EBP register to carry the 6th system call
argument.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-20 09:18:59 -07:00
Andrew Boie
a23c245a9a userspace: flesh out internal syscall interface
* Instead of a common system call entry function, we create a
table mapping system call ids to handler skeleton functions which are
invoked directly by the architecture code that receives the system
call.

* A system call handler prototype is specified. All but the most trivial
system calls will implement one of these. They validate all the
arguments, including verifying kernel/device object pointers, ensuring
that the calling thread has appropriate access to any memory buffers
passed in, and performing other parameter checks that the base system
call implementation does not check, or only checks with __ASSERT().

It's only possible to install a system call implementation directly
inside this table if the implementation has a return value and requires
no validation of any of its arguments.

A sample handler implementation for k_mutex_unlock() might look like:

u32_t _syscall_k_mutex_unlock(u32_t mutex_arg, u32_t arg2, u32_t arg3,
                              u32_t arg4, u32_t arg5, void *ssf)
{
        struct k_mutex *mutex = (struct k_mutex *)mutex_arg;
        _SYSCALL_ARG1;

        _SYSCALL_IS_OBJ(mutex, K_OBJ_MUTEX, 0,  ssf);
        _SYSCALL_VERIFY(mutex->lock_count > 0, ssf);
        _SYSCALL_VERIFY(mutex->owner == _current, ssf);

        k_mutex_unlock(mutex);

        return 0;
}

* The x86 port has been modified to work with the system call table
instead of calling a common handler function. An issue where registers
being changed could confuse the compiler has been fixed; all registers,
even ones used for parameters, must be preserved across the system
call.

* A new arch API for producing a kernel oops when validating system
call arguments has been added. The debug information reported will be
from the system call site and not from inside the handler function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-15 13:44:45 -07:00
Andrew Boie
424e993b41 x86: implement userspace APIs
- _arch_user_mode_enter() implemented
- _arch_is_user_context() implemented
- _new_thread() will honor K_USER option if passed in
- System call triggering macros implemented
- _thread_entry_wrapper moved and now looks for the next function to
call in EDI

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 12:46:36 -07:00
Anas Nashif
1e8afbfe5a cleanup: remove lots of references to unified kernel
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2017-09-12 12:37:11 -04:00
Andrew Boie
d81f9c1e4d x86: revise _x86_mmu_buffer_validate
- There's no point in building up "validity" (declared volatile for some
  strange reason), just exit with false return value if any of the page
  directory or page table checks don't come out as expected

- The function was returning the opposite value as its documentation
  (0 on success, -EPERM on failure). Documentation updated.

- This function will only be used to verify buffers from user-space.
  There's no need for a flags parameter, the only option that needs to
  be passed in is whether the buffer has write permissions or not.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:41 -07:00
Andrew Boie
3bb677d6eb x86: don't set FS/GS segment selectors
We shouldn't be imposing any policy here; we do not yet use these in
Zephyr. Zero these at boot and otherwise leave them alone.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-12 08:40:08 -07:00
Andrew Boie
1e06ffc815 zephyr: use k_thread_entry_t everywhere
In various places, either a private _thread_entry_t typedef or the
full prototype was being used. Be consistent and use the same typedef
everywhere.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2017-09-11 11:18:22 -07:00