Commit graph

166 commits

Author SHA1 Message Date
Andrew Boie 768a30c14f x86: organize 64-bit ESF
The callee-saved registers have been separated out and will not
be saved/restored if exception debugging is shut off.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-08 08:51:43 -05:00
Andy Ross 0e32f4dab0 arch/x86_64: Save RFLAGS during arch_switch()
The context switch implementation forgot to save the current flag
state of the old thread, so on resume the flags would be restored to
whatever value they had at the last interrupt preemption or thread
initialization.  In practice this guaranteed that the interrupt enable
bit would always be wrong, because obviously new threads and preempted
ones have interrupts enabled, while arch_switch() is always called
with them masked.  This opened up a race between exit from
arch_switch() and the final exit path in z_swap().

The other state bits weren't relevant -- the oddball ones aren't used
by Zephyr, and as arch_switch() on this architecture is a function
call the compiler would have spilled the (caller-save) comparison
result flags anyway.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
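
A minimal C sketch of the fix in the commit above; the real save happens in the assembly switch path, and the struct/field names here are purely illustrative.

    #include <stdint.h>

    struct thread_arch_sketch {
        uint64_t rflags;               /* hypothetical slot for saved RFLAGS */
    };

    /* Capture the outgoing thread's RFLAGS at switch time so the interrupt
     * enable bit (and the rest of the flags) is restored correctly when the
     * thread resumes, rather than inheriting whatever the last interrupt or
     * thread initialization left behind.
     */
    static inline void save_rflags_sketch(struct thread_arch_sketch *old)
    {
        uint64_t flags;

        __asm__ volatile("pushfq; popq %0" : "=r"(flags) : : "memory");
        old->rflags = flags;
    }
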
Andy Ross 5b85d6da6a arch/x86_64: Poison instruction pointer of running threads
There was a bug where double-dispatch of a single thread on multiple
SMP CPUs was possible.  This can be mind-bending to diagnose, so when
CONFIG_ASSERT is enabled add an extra instruction to __resume (the
shared code path for both interrupt return and context switch) that
poisons the shared RIP of the now-running thread with a recognizable
invalid value.

Now attempts to run the thread again will crash instantly with a
discoverable cookie in their instruction pointer, and this will remain
true until it gets a new RIP at the next interrupt or switch.

This is under CONFIG_ASSERT because it meets the same design goals of
"a cheap test for impossible situations", not because it's part of the
assertion framework.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
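
A C sketch of the poisoning idea from the commit above; the field name and the poison value are illustrative stand-ins (the real change is a single store in the __resume assembly path).

    #include <stdint.h>

    #define RIP_POISON 0xB001B001UL    /* illustrative, recognizable bad value */

    struct thread_rip_sketch {
        uint64_t rip;                  /* saved instruction pointer for resume */
    };

    /* Once a thread has been dispatched on this CPU, stomp its saved RIP so
     * an erroneous second dispatch on another CPU faults immediately at a
     * recognizable address rather than silently running the thread twice.
     */
    static inline void poison_running_thread(struct thread_rip_sketch *t)
    {
    #ifdef CONFIG_ASSERT
        t->rip = RIP_POISON;
    #else
        (void)t;
    #endif
    }
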
Andy Ross 86430d8d46 kernel: arch: Clarify output switch handle requirements in arch_switch
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme.  As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.

Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
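
A hedged C sketch of the contract being documented: the outgoing thread's switch handle must be written through the second argument, because that store is now a synchronization signal. Only the arch_switch() signature comes from the commit; the helper and names are stand-ins.

    static void *save_outgoing_context(void)
    {
        static int dummy_context;      /* stand-in for the saved register set */

        return &dummy_context;
    }

    static inline void arch_switch_sketch(void *switch_to, void **switched_from)
    {
        void *old_handle = save_outgoing_context();

        /* The now-mandatory write-through: signals that the old context is
         * fully saved and may be picked up by another CPU.
         */
        *switched_from = old_handle;

        (void)switch_to;               /* restore and jump to the new context */
    }
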
Andrew Boie e34f1cee06 x86: implement kernel page table isolation
Implement a set of per-CPU trampoline stacks on which all
interrupts and exceptions will initially land, and which also
serve as intermediate stacks for privilege changes, since we need
some stack space to swap page tables.

Set up the special trampoline page which contains all the
trampoline stacks, TSS, and GDT. This page needs to be
present in the user page tables or interrupts don't work.

CPU exceptions, with KPTI turned on, are treated as interrupts
and not traps so that we have IRQs locked on exception entry.

Add some additional macros for defining IDT entries.

Add special handling of locore text/rodata sections when
creating user mode page tables on x86-64.

Restore qemu_x86_64 to use KPTI, and remove restrictions on
enabling user mode on x86-64.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-17 16:17:39 -05:00
Andrew Boie 2690c9e550 x86: move some per-cpu initialization to C
No reason we need to stay in the assembly domain once we have
GS and a stack set up.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie a594ca7c8f kernel: cleanup and formally define CPU start fn
The "key" parameter is legacy, remove it.

Add a typedef for the expected function pointer type.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie 4fcf28ef25 x86: mitigate swapgs Spectre V1 attacks
See CVE-2019-1125. We mitigate this by adding an 'lfence'
upon interrupt/exception entry after the decision has been
made whether it's necessary to invoke 'swapgs' or not.

Only applies to x86_64; 32-bit doesn't use swapgs.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
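
A C rendering of the entry-path logic described above (the real code is assembly in locore.S); the helper name and the way the saved CS selector reaches it are assumptions.

    #include <stdint.h>

    /* On interrupt/exception entry, decide from the saved CS selector whether
     * we arrived from user mode (CPL 3) and therefore need 'swapgs'; either
     * way an 'lfence' follows, so the CPU cannot speculate past the branch
     * with the wrong GS base (CVE-2019-1125).
     */
    static inline void entry_swapgs_sketch(uint64_t saved_cs)
    {
        if ((saved_cs & 0x3) == 0x3) {  /* came from ring 3 */
            __asm__ volatile("swapgs");
        }
        __asm__ volatile("lfence" ::: "memory");
    }
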
Andrew Boie 3d80208025 x86: implement user mode on 64-bit
- In early boot, enable the syscall instruction and set up
  necessary MSRs
- Add a hook to update page tables on context switch
- Properly initialize thread based on whether it will
  start in user or supervisor mode
- Add landing function for system calls to execute the
  desired handler
- Implement arch_user_string_nlen()
- Implement logic for dropping a thread down to user mode
- Reserve per-CPU storage space for user and privilege
  elevation stack pointers, necessary for handling syscalls
  when no free registers are available
- Proper handling of gs register considerations when
  transitioning privilege levels

Kernel page table isolation (KPTI) is not yet implemented.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
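
A sketch of the early-boot MSR programming referred to in the first bullet of the commit above, using the architectural MSR numbers; the helpers, the entry symbol, and the exact FMASK value are illustrative assumptions.

    #include <stdint.h>

    #define IA32_EFER  0xC0000080U  /* bit 0 (SCE) enables syscall/sysret */
    #define IA32_STAR  0xC0000081U  /* kernel/user segment selectors */
    #define IA32_LSTAR 0xC0000082U  /* 64-bit syscall entry point */
    #define IA32_FMASK 0xC0000084U  /* RFLAGS bits cleared on syscall entry */

    static inline uint64_t rdmsr_sketch(uint32_t msr)
    {
        uint32_t lo, hi;

        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32) | lo;
    }

    static inline void wrmsr_sketch(uint32_t msr, uint64_t val)
    {
        __asm__ volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                                    "d"((uint32_t)(val >> 32)));
    }

    extern void syscall_entry_stub(void);   /* hypothetical entry symbol */

    static void enable_syscall_sketch(void)
    {
        wrmsr_sketch(IA32_EFER, rdmsr_sketch(IA32_EFER) | 1U);    /* EFER.SCE */
        wrmsr_sketch(IA32_LSTAR, (uint64_t)(uintptr_t)syscall_entry_stub);
        wrmsr_sketch(IA32_FMASK, 1UL << 9); /* mask IF: enter with IRQs off */
        /* IA32_STAR would also be loaded with the CS selector pair here */
    }
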
Andrew Boie 077b587447 x86: implement hw-based oops for both variants
We use a fixed value of 32, as the way interrupts/exceptions
are set up in x86_64's locore.S does not lend itself to
Kconfig configuration of the vector to use.

HW-based kernel oops is now permanently on; there's no reason
I can see to make it optional.

Default vectors for IPI and irq offload adjusted to not
collide.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie ded0185eb8 x86: add GDT descriptors for user mode
These are arranged in the particular order required
by the syscall/sysret instructions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie 692fda47fc x86: use MSRs for %gs
We don't need to set up GDT data descriptors for setting
%gs. Instead, we use the x86 MSRs to set GS_BASE and
KERNEL_GS_BASE.

We don't currently allow user mode to set %gs on its own,
but later on if we do, we have everything set up to issue
'swapgs' instructions on syscall or IRQ.

Unused entries in the GDT have been removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
Andrew Boie 10d033ebf0 x86: enable recoverable exceptions on 64-bit
These were previously assumed to always be fatal.
We can't have the faulting thread's XMM registers
clobbered, so put the SIMD/FPU state onto the stack
as well. This is fairly large (512 bytes) and the
exception stack is already uncomfortably small, so
increase it to 2K.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-13 16:35:10 -05:00
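
A sketch of why the exception stack grows: the FXSAVE area that holds the x87/SSE state is a fixed 512 bytes and must be 16-byte aligned. The struct and helper names are illustrative.

    #include <stdint.h>

    struct esf_fpu_sketch {
        /* ... general-purpose registers would precede this ... */
        uint8_t sse_region[512] __attribute__((aligned(16)));
    };

    /* Save/restore SIMD/FPU state around exception handling so a recovered
     * thread gets its XMM registers back intact.
     */
    static inline void esf_fpu_save_sketch(struct esf_fpu_sketch *esf)
    {
        __asm__ volatile("fxsave %0" : "=m"(esf->sse_region));
    }

    static inline void esf_fpu_restore_sketch(const struct esf_fpu_sketch *esf)
    {
        __asm__ volatile("fxrstor %0" :: "m"(esf->sse_region));
    }
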
Andrew Boie 2b67ca8ac9 x86: improve exception debugging
We now dump more information for less common cases,
and this is now centralized code for 32-bit/64-bit.
All of this code is now correctly wrapped around
CONFIG_EXCEPTION_DEBUG. Some cruft and unused defines
removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-12-17 11:39:22 -08:00
Andrew Boie 4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Andrew Boie 0b27cacabd x86: up-level page fault handling
Now in common code for 32-bit and 64-bit.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 11:19:38 -05:00
Andrew Boie 7d1ae023f8 x86: enable stack overflow detection on 64-bit
CONFIG_HW_STACK_PROTECTION is now available.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Andrew Boie 9fb89ccf32 x86: consolidate some fatal error code
Some code for unwinding stacks and z_x86_fatal_error()
now in a common C file, suitable for both modes.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-06 17:50:34 -08:00
Andrew Boie e1f2292895 x86: intel64: set page tables for every CPU
The page tables to use are now stored in the cpuboot struct.
For the first CPU, we set them to the flat page tables, and then
update later in z_x86_prep_c() once the runtime tables have
been generated.

For other CPUs, by the time we get to z_arch_start_cpu()
the runtime tables are ready to go, and so we just install
them directly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie 46540f4bb1 intel64: don't set global flag in ptables
Setting it may cause issues when runtime page tables are installed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie cdd721db3b locore: organize data by type
Program text, rodata, and data need different MMU
permissions. Split out rodata and data from the program
text, updating the linker script appropriately.

Region size symbols added to the linker script, so these
can later be used with MMU_BOOT_REGION().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-24 12:48:45 -07:00
Andrew Boie 979b17f243 kernel: activate arch interface headers
Duplicate definitions elsewhere have been removed.

A couple of functions which the arch interface defines as
non-inline, but which were implemented inline by native_posix
and intel64, have been made non-inline.

Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.

Some massaging of native_posix headers to get everything
in the right scope.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-21 10:13:38 -07:00
Andy Ross 8bc3b6f673 arch/x86/intel64: Fix assumption with dummy threads
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field.  This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.

Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).

Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-10-19 12:09:32 -07:00
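
A C sketch of the pointer arithmetic being described: recover the thread's address from the address of its switch_handle field instead of loading the (possibly uninitialized) field value. The struct layout here is a stand-in.

    #include <stddef.h>

    struct k_thread_sketch {
        char other_fields[64];         /* stand-in for the rest of the struct */
        void *switch_handle;
    };

    /* Given a pointer to the switch_handle field (what the switch code has),
     * compute the enclosing thread's address with a constant subtraction --
     * no load, so it also works for a never-initialized dummy thread.
     */
    static inline struct k_thread_sketch *thread_from_handle(void **handle_field)
    {
        return (struct k_thread_sketch *)((char *)handle_field -
                offsetof(struct k_thread_sketch, switch_handle));
    }
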
Andrew Boie 9c76d28bee x86: intel64: fatal: minor formatting improvements
Line up everything nicely, add leading '0x' to hex
addresses, and remove redundant newlines. Add
whitespace between the register name and contents
so the contents can be easily selected from a terminal.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
Andrew Boie fdd3ba896a x86: intel64: set the WP bit for paging
Otherwise, supervisor mode can write to read-only areas,
failing tests that check this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-15 09:00:49 -07:00
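
A sketch of the one-bit change, with an illustrative helper name: WP is bit 16 of CR0, and while it is clear, supervisor-mode writes ignore read-only page permissions.

    #include <stdint.h>

    #define CR0_WP (1UL << 16)  /* honor read-only pages even in ring 0 */

    static inline void enable_write_protect_sketch(void)
    {
        uint64_t cr0;

        __asm__ volatile("movq %%cr0, %0" : "=r"(cr0));
        cr0 |= CR0_WP;
        __asm__ volatile("movq %0, %%cr0" :: "r"(cr0));
    }
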
Andrew Boie e340f8d22e x86: intel64: enable no-execute
Set the NXE bit in the EFER MSR so that the NX bit can
be set in page tables. Otherwise, the NX bit is treated
as reserved and leads to a fault if set.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-10 13:41:02 -07:00
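
A sketch using the architectural numbers: NXE is bit 11 of the IA32_EFER MSR (0xC0000080); without it, an NX bit in a page table entry is treated as reserved and faults. The helper name is illustrative.

    #include <stdint.h>

    #define IA32_EFER 0xC0000080U
    #define EFER_NXE  (1U << 11)   /* allow the NX bit in page table entries */

    static inline void enable_nxe_sketch(void)
    {
        uint32_t lo, hi;

        __asm__ volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(IA32_EFER));
        lo |= EFER_NXE;
        __asm__ volatile("wrmsr" :: "c"(IA32_EFER), "a"(lo), "d"(hi));
    }
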
Andrew Boie 7f715bd113 intel64: add inline definition of z_arch_switch()
This just calls the assembly code. It needed to be
inline per the architecture API.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-09 09:14:18 -04:00
Charles E. Youse f7cfb4303b arch/x86: do not assume MP means SMP
It's possible to have multiple processors configured without using the
SMP scheduler, so don't make definitions dependent on CONFIG_SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse e6a31a9e89 arch/x86: (Intel64) initialize TSS interrupt stack from cpuboot[]
In non-SMP MP situations, the interrupt stacks might not exist, so
do not assume they do. Instead, initialize the TSS IST1 from the
cpuboot[] vector (meaning, on APs, the stack from z_arch_start_cpu).
Eliminates redundancy at the same time.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 643661cb44 arch/x86: declare z_x86_prep_c() in kernel_arch_func.h
And remove the ad hoc prototype in cpu.c for Intel64.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 5abab591c2 arch/x86: (Intel64) make z_arch_start_cpu() synchronous
Don't leave z_arch_start_cpu() until the target CPU has been started.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 3b145c0d4b arch/x86: (Intel64) do not lock interrupts around irq_offload()
This is the Wrong Thing(tm) with SMP enabled. Previously this
worked because interrupts would be re-enabled in the interrupt
entry sequence, but this is no longer the case.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 66510db98c arch/x86: (Intel64) add scheduler IPI support
Add z_arch_sched_ipi() and such to enable scheduler IPIs when SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 74e3717af6 arch/x86: (Intel64) fix conditional assembly in locore.S
The conditional assembly was ignoring the rest of the expression,
though the effect was harmless (including unreachable code in some
builds).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 3eb1a8b59a arch/x86: (Intel64) implement SMP support
Add duplicate per-CPU data structures (x86_cpuboot, tss, stacks, etc.)
for up to 4 total CPUs, add code in locore and z_arch_start_cpu().

The test board, qemu_x86_long, now defaults to 2 CPUs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 2808908816 arch/x86: alter signature of z_x86_prep_c() function
Take a dummy first argument, so that the BSP entry point (z_x86_prep_c)
has the same signature as the AP entry point (smp_init_top).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse f9eaee35b8 arch/x86: (Intel64) use per-CPU parameter struct for CPU startup
A new 'struct x86_cpuboot' is created as well as an instance called
'x86_cpuboot[]' which contains per-CPU boot data (initial stack,
entry function/arg, selectors, etc.). The locore now consults this
table to set up per-CPU registers, etc. during early boot.

Also, rename tss.c to cpu.c as its scope is growing.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse edf5761c83 arch/x86: (Intel64) rename kernel segment constants
There's no need to qualify the 64-bit CS/DS selectors, and the GS and
TR selectors are renamed CPU0_GS and CPU0_TR as they are CPU-specific.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 90bf0da332 arch/x86: (Intel64) optimize and re-order startup assembly sequence
In some places the code was being overly pedantic; e.g., there is no
need to load our own 32-bit descriptors because the loader's are fine
for our purposes. We can defer loading our own segments until 64-bit.

The sequence is re-ordered to facilitate code sharing between the BSP
and APs when SMP is enabled (all BSP-specific operations occur before
the per-CPU initialization).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 17e135bc41 arch/x86: (Intel64) clear BSS before entering long mode
This is really just to facilitate sharing CPU bootstrap code
between the BSP and the APs, by moving the clear operation out of
the way.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse a981f51fe6 arch/x86: drivers/loapic_intr.c: move local APIC initialization
In the general case, the local APIC can't be treated as a normal device
with a single boot-time initialization - on SMP systems, each CPU must
initialize its own. Hence the initialization proper is separated from
the device-driver initialization, and said initialization is called
from the early startup-assembly code when appropriate.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 418e5c1b38 arch/x86: factor out common assembly startup code
The 32-bit and 64-bit assembly startup sequences share quite a
bit of common code, so it's factored out into one file to avoid
repeating ourselves (and potentially falling out of sync).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse 8279af76c8 arch/x86: elevate prep_c.c to common code
Elevate the previously 32-bit-only z_x86_prep_c() function to common
code, so both 32-bit and 64-bit arches now enter the kernel this way.
Minor changes to prep_c.c to make it build with the SMP scheduler on.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Charles E. Youse cc9be2e982 arch/x86: (Intel64) start up on _interrupt_stack, not _exception_stack
Simply for consistency with other platforms.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-10-07 19:46:55 -04:00
Andrew Boie 89d4c6928e kernel: add arch abstraction for irq_offload()
This makes it clearer that this is an API that is expected
to be implemented at the architecture level.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 11:11:42 +02:00
Andrew Boie 61901ccb4c kernel: rename z_new_thread()
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Charles E. Youse 200056df2f arch/x86: rename CONFIG_X86_MULTIBOOT and related to CONFIG_MULTIBOOT
Simple naming change, since MULTIBOOT is clear enough by itself and
"namespacing" it to X86 is unnecessary and/or inappropriate.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-29 12:30:34 -07:00
Charles E. Youse 66a2ed2360 arch/x86: (Intel64) move RAX to volatile register set
This used to be part of the "restore always" set of registers because
__swap was expected to return a value.  No longer required, so RAX is
moved to the volatile registers and we save a few cycles occasionally.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse 074ce889fb arch/x86: (Intel64) migrate from __swap to z_arch_switch()
The latter primitive is required for SMP.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse 32fc239aa2 arch/x86: (Intel64) use TSS for per-CPU variables
A space is allocated in the TSS for per-CPU variables. At present,
this is only a 'struct _cpu *' to find the _kernel CPU struct. The
locore routines are rewritten to find _current and _nested via this
pointer rather than referencing the _kernel global directly.

This is obviously in preparation for SMP support.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
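
A hedged sketch of the access pattern: a 'struct _cpu *' is parked inside the TSS, and since GS maps the TSS it can be fetched with a segment-override load. The 0x38 displacement and the struct contents are made up for illustration.

    struct cpu_sketch {
        int nested;                    /* stand-in per-CPU fields */
    };

    static inline struct cpu_sketch *curr_cpu_sketch(void)
    {
        struct cpu_sketch *cpu;

        /* load the per-CPU pointer stored at a fixed offset inside the TSS */
        __asm__ volatile("movq %%gs:0x38, %0" : "=r"(cpu));
        return cpu;
    }
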
Charles E. Youse 1d8c80bc05 arch/x86: (Intel64) move STACK_SENTINEL check
This function call was erroneously inserted between the instruction
that set the Z flag and the instruction that tested the Z flag. The
call is moved up a few instructions where it can't junk CPU state.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse e4d5ab363c arch/x86: (Intel64) define TSS in C, not assembly
Declare the 64-bit TSS as a struct, and define the instance in C.
Add a data segment selector that overlaps the TSS and keep that
loaded in GS so we can access the TSS via a segment-override prefix.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse a8de9577c9 arch/x86: restructure ISR stacks (conceptually)
This is largely a conceptual change rather than an actual change.
Instead of using an array of interrupt stacks (one for each IRQ
nesting level), we use one interrupt stack and subdivide it. The
effect is the same, but this is more in line with the Zephyr model
of one ISR stack per CPU (as reflected in init.c).

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse bd094ddac2 arch/x86: inline x2APIC EOI in 64-bit code
Like its 32-bit sibling, the 64-bit code should EOI inline rather
than invoking a function, since the call defeats the performance
advantages of x2APIC.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-23 17:50:09 -07:00
Charles E. Youse 3036faf88a tests/benchmarks: fix BOOT_TIME_MEASUREMENT
The boot time measurement sample was giving bogus values on x86: an
assumption was made that the system timer is in sync with the CPU TSC,
which is not the case on most x86 boards.

Boot time measurements are no longer permitted unless the timer source
is the local APIC. To avoid issues of TSC scaling, the startup datum
has been forced to 0, which is in line with the ARM implementation
(which is the only other platform that supports this feature).

Cleanups along the way:

As the datum is now assumed zero, some variables are removed and
calculations simplified. The global variables involved in boot time
measurements are moved to the kernel.h header rather than being
redeclared in every place they are referenced. Since none of the
measurements actually use 64-bit precision, the samples are reduced
to 32-bit quantities.

In addition, this feature has been enabled in long mode.

Fixes: #19144

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-21 16:43:26 -07:00
Charles E. Youse a224998355 arch/x86/intel64: do not use thread_state for arch data
k_thread.thread_state (or rather, _thread_base.thread_state) should be
private to the kernel/scheduler, so flags previously stored there are
moved to _thread_arch where they belong.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-20 14:31:18 -04:00
Charles E. Youse dc0314af7f arch/x86: honor CONFIG_INIT_STACKS in 64-bit mode
Initialize the IRQ stacks with 0xAA bytes when the option is enabled.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
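
A trivial sketch of what the option does, with an illustrative stack object: fill the IRQ stack with 0xAA at boot so later stack-usage checks can scan for untouched bytes.

    #include <string.h>

    #define IRQ_STACK_SIZE 8192        /* illustrative size */

    static char irq_stack_sketch[IRQ_STACK_SIZE];

    static void init_irq_stack_sketch(void)
    {
    #ifdef CONFIG_INIT_STACKS
        memset(irq_stack_sketch, 0xAA, sizeof(irq_stack_sketch));
    #endif
    }
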
Charles E. Youse d506489999 arch/x86: optimize nested IRQ entry/exit
We don't need to save the ABI caller-save registers here, because
we don't preempt threads from nested IRQ contexts.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse 468cd4d98f arch/x86: add support for CONFIG_STACK_SENTINEL
Apparently I missed the arch-dependent bit of this. Fixed.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse a5eea17dda arch/x86: add SSE floating-point to Intel64 subarch
This is a naive implementation which does "eager" context switching
for floating-point context, which, of course, introduces performance
concerns. Other approaches have security concerns, SMP implications,
and impact the x86 arch and Zephyr project as a whole. Discussion is
needed, so punting with the straightforward solution for now.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse 2bb59fc84e arch/x86: add nested interrupt support to Intel64
Add support for multiple IRQ stacks and interrupt nesting.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse cdb9ac3895 arch/x86: Add exception reporting code for Intel64
Fleshed out z_arch_esf_t and added code to build this frame when
exceptions occur. Created a separate small stack for exceptions and
shifted the initialization code to use this instead of the IRQ stack.

Moved IRQ stack(s) to irq.c.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse a10f2601cc arch/x86: add IRQ offloading to Intel64 subarch
The IRQ_OFFLOAD_VECTOR config option is also moved to the arch level,
as it is shared between both 32- and 64-bit subarches.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse 0e0199387a arch/x86: set default stack sizes
Using the arch Kconfig here, instead of kernel/Kconfig. Intel64 with
the SysV ABI requires some pretty big stacks. These 4K-8K defaults
are arguably a bit small, but the Zephyr defaults are REALLY too small.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse 4ddaa59a89 arch/x86: initial Intel64 support
First "complete" version of Intel64 support for x86. Compilation of
apps for supported boards (read: up_squared) with CONFIG_X86_LONGMODE=y
is now working. Booting, device drivers, interrupts, scheduling, etc.
appear to be functioning properly. Beware that this is ALPHA quality,
not ready for production use, but the port has advanced far enough that
it's time to start working through the test suite and samples, fleshing
out any missing features, and squashing bugs.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00
Charles E. Youse 34307a54f0 arch/x86: initial Intel64 bootstrap framework
This patch adds basic build infrastructure, definitions, a linker
script, etc. to use the Zephyr 0.10.1 SDK to build a 64-bit
ELF binary suitable for use with GRUB to minimally bootstrap an
Apollo Lake (e.g., UpSquared) board. The resulting binary can hardly
be called a Zephyr kernel as it is lacking most of the glue logic,
but it is a starting point to flesh those out in the x86 tree.

The "kernel" builds with a few harmless warnings, both with GCC from
the Zephyr SDK and with ICC (which is currently being worked on in
a separate branch). These warnings are related either to pointer size
differences (since this is an LP64 build) or to dummy functions
that will be replaced with working versions shortly.

Signed-off-by: Charles E. Youse <charles.youse@intel.com>
2019-09-15 11:33:47 +08:00