It's possible to have multiple processors configured without using the
SMP scheduler, so don't make definitions dependent on CONFIG_SMP.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
In non-SMP MP situations, the interrupt stacks might not exist, so
do not assume they do. Instead, initialize the TSS IST1 from the
cpuboot[] vector (meaning, on APs, the stack from z_arch_start_cpu).
Eliminates redundancy at the same time.
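A minimal sketch of the idea, with invented field names (sp, tss, ist1) rather
than the actual tree layout:

    #include <stdint.h>

    /* Sketch only: point IST1 at the boot stack recorded for this CPU
     * (filled in by z_arch_start_cpu() on APs) instead of assuming the
     * interrupt stacks exist. */
    struct tss64_sketch   { uint64_t ist1; };
    struct cpuboot_sketch { uint64_t sp; struct tss64_sketch *tss; };

    static void cpu_set_ist1(struct cpuboot_sketch *cb)
    {
        cb->tss->ist1 = cb->sp;
    }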
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This is the Wrong Thing(tm) with SMP enabled. Previously this
worked because interrupts would be re-enabled in the interrupt
entry sequence, but this is no longer the case.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
was ignoring the rest of the expression, though the effect was
harmless (including unreachable code in some builds).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Trivial change to the Kconfig: the first 32 vectors are reserved,
so it's not possible to have 256 IRQ vectors. Change max to 224.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Add duplicate per-CPU data structures (x86_cpuboot, tss, stacks, etc.)
for up to 4 total CPUs, add code in locore and z_arch_start_cpu().
The test board, qemu_x86_long, now defaults to 2 CPUs.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Take a dummy first argument, so that the BSP entry point (z_x86_prep_c)
has the same signature as the AP entry point (smp_init_top).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
A new 'struct x86_cpuboot' is created as well as an instance called
'x86_cpuboot[]' which contains per-CPU boot data (initial stack,
entry function/arg, selectors, etc.). The locore now consults this
table to set up per-CPU registers, etc. during early boot.
Also, rename tss.c to cpu.c as its scope is growing.
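Roughly, the record looks like this (field names and types here are
illustrative only; the real definition lives in the x86 arch headers):

    #include <stdint.h>

    struct x86_cpuboot_sketch {
        volatile int ready;       /* set by the AP once it is running */
        uint16_t tr;              /* TR selector for this CPU's TSS */
        uint64_t gs_base;         /* per-CPU GS base */
        uint64_t sp;              /* initial stack pointer */
        void (*fn)(int, void *);  /* entry function (BSP or AP) */
        void *arg;                /* argument passed to fn */
    };

    /* one slot per CPU (up to 4 per the SMP commit above), consulted by
     * locore during early boot */
    struct x86_cpuboot_sketch x86_cpuboot_sketch[4];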
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
There's no need to qualify the 64-bit CS/DS selectors, and the GS and
TR selectors are renamed CPU0_GS and CPU0_TR as they are CPU-specific.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
In some places the code was being overly pedantic; e.g., there is no
need to load our own 32-bit descriptors because the loader's are fine
for our purposes. We can defer loading our own segments until 64-bit.
The sequence is re-ordered to facilitate code sharing between the BSP
and APs when SMP is enabled (all BSP-specific operations occur before
the per-CPU initialization).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This is really just to facilitate sharing CPU bootstrap code between
the BSP and the APs, by moving the clear operation out of the way.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
In the general case, the local APIC can't be treated as a normal device
with a single boot-time initialization - on SMP systems, each CPU must
initialize its own. Hence the initialization proper is separated from
the device-driver initialization and is called from the early
startup assembly code when appropriate.
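A minimal sketch of the per-CPU piece, assuming the architectural default
local APIC MMIO base and an invented function name:

    #include <stdint.h>

    #define LOAPIC_BASE 0xfee00000UL   /* architectural default MMIO base */
    #define LOAPIC_SVR  0x0f0          /* spurious interrupt vector register */

    static inline void loapic_write(uint32_t reg, uint32_t val)
    {
        *(volatile uint32_t *)(LOAPIC_BASE + reg) = val;
    }

    /* Per-CPU piece: run by every CPU (BSP and APs) from the early
     * startup assembly, not from the device-driver init hook. */
    void loapic_percpu_enable_sketch(void)
    {
        /* set the APIC software-enable bit and a spurious vector */
        loapic_write(LOAPIC_SVR, (1U << 8) | 0xffU);
    }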
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The 32-bit and 64-bit assembly startup sequences share quite a bit
of common code, so it's factored out into one file to avoid
repeating ourselves (and potentially falling out of sync).
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
The linker script was missing symbols that defined the boundaries
of kernel memory segments (_image_rom_end, etc.). These are added
so that core/memmap.c can properly account for those segments.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Elevate the previously 32-bit-only z_x86_prep_c() function to common
code, so both 32-bit and 64-bit arches now enter the kernel this way.
Minor changes to prep_c.c to make it build with the SMP scheduler on.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
And set qemu_x86_long board to build with CONFIG_SMP=y by default.
Apparently two benchmark tests - latency_measure and sys_kernel -
do not work with the SMP scheduler, so those tests are disabled.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
This patch is a preparatory step in enabling the MMU in
long mode; no steps are taken to implement long mode support.
We introduce struct x86_page_tables, which represents the
top-level data structure for page tables:
- For 32-bit, this will contain a four-entry page directory
pointer table (PDPT)
- For 64-bit, this will (eventually) contain a page map level 4
table (PML4)
In either case, this pointer value is what gets programmed into
CR3 to activate a set of page tables. There are extra bits in
CR3 to set for long mode; we'll get around to that later.
This abstraction will allow us to use the same APIs that work
with page tables in either mode, rather than hard-coding that
the top level data structure is a PDPT.
z_x86_mmu_validate() has been re-written to make it easier to
add another level of paging for long mode, to support 2MB
PDPT entries, and correctly validate regions which span PDPTE
entries.
Some MMU-related APIs moved out of 32-bit x86's arch.h into
mmustructs.h.
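A rough sketch of the abstraction (the real definition is in mmustructs.h
and differs in attributes and accessors):

    #include <stdint.h>

    struct x86_page_tables_sketch {
        union {
            uint64_t pdpt[4];    /* 32-bit PAE: page directory pointer table */
            uint64_t pml4[512];  /* 64-bit, later: page map level 4 */
        } top;
    } __attribute__((aligned(4096)));

    /* The physical address of this object (plus the mode-specific flag
     * bits mentioned above) is what eventually gets loaded into CR3. */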
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This patch re-namespaces global variables and functions
that are used only within the arch/arm/ code to be
prefixed with z_arm_.
Some instances of CamelCase have been corrected.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
* The old code may not save the caller-saved regs (e.g. r7-r12) correctly,
because the sys call entry is invoked in the form of a static inline
function, and compiler optimizations may not save all the caller-saved regs.
* The new code uses the IRQ stack frame as the sys call frame and guarantees
that all the caller-saved regs are pushed and popped correctly.
* The side effect of the new code is more stack operations and a little
overhead.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
This makes it clearer that this is an API that is expected
to be implemented at the architecture level.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.
The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory, and the pointers aren't
accessible to user threads anyway; just expose the actual thread
objects.
The redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines in
init.c are removed; just use the Kconfig options they derive
from.
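For reference, the declarations implied here would look roughly like the
following (symbol names are those suggested above and may not match the
tree exactly):

    /* assumes <kernel.h> for struct k_thread and k_thread_stack_t */
    extern struct k_thread z_main_thread;
    extern struct k_thread z_idle_thread;

    extern k_thread_stack_t z_main_stack[];
    extern k_thread_stack_t z_idle_stack[];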
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This metric shows when the system first enters an idle
state, which has already been recorded in the arch-
independent implementation of the idle thread.
Only x86 was doing this.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface and
has been renamed z_arch_kernel_init().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.
z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
k_cpu_idle() and k_cpu_atomic_idle() were being directly
implemented by arch code.
Rename these implementations to z_arch_cpu_idle() and
z_arch_cpu_atomic_idle(), and call them from new inline
function definitions in kernel.h.
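The new wrappers in kernel.h presumably reduce to something like:

    /* the architecture supplies these hooks */
    void z_arch_cpu_idle(void);
    void z_arch_cpu_atomic_idle(unsigned int key);

    /* public API stays the same, now defined inline in kernel.h */
    static inline void k_cpu_idle(void)
    {
        z_arch_cpu_idle();
    }

    static inline void k_cpu_atomic_idle(unsigned int key)
    {
        z_arch_cpu_atomic_idle(key);
    }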
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().
References from test cases changed to k_is_in_isr().
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
ACPI is predominantly x86, and currently only implemented on x86,
but it is employed on other architectures, so rename accordingly.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Simple naming change, since MULTIBOOT is clear enough by itself and
"namespacing" it to X86 is unnecessary and/or inappropriate.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
x86 has more complex memory maps than most Zephyr targets. A mechanism
is introduced here to manage such a map, and some methods are provided
to populate it (e.g., Multiboot).
The x86_info tool is extended to display memory map data.
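One plausible shape for such a map (entry layout and names are guesses for
illustration, not the actual arch/x86 definition):

    #include <stdint.h>

    enum memmap_type_sketch {
        MEMMAP_RAM,
        MEMMAP_RESERVED,
        MEMMAP_ACPI,
        MEMMAP_UNKNOWN,
    };

    struct memmap_entry_sketch {
        uint64_t base;                  /* physical start address */
        uint64_t length;                /* size of the region in bytes */
        enum memmap_type_sketch type;
    };

    /* fixed-size table, filled in from whichever source is available
     * (Multiboot mmap, defaults, etc.) and dumped by x86_info */
    static struct memmap_entry_sketch memmap_sketch[32];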
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Originally, the multiboot info struct was copied in the early assembly
language code. This code is moved to a C function in multiboot.c for
two reasons:
1. It's about to get more complicated, as we want the ability to use
a multiboot-provided memory map if available, and
2. this will facilitate its sharing between 32- and 64-bit subarches.
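A sketch of the C-side copy (struct abbreviated and names invented here;
the real layout follows the Multiboot spec):

    #include <stdint.h>
    #include <string.h>

    struct multiboot_info_sketch {
        uint32_t flags;
        uint32_t mem_lower;
        uint32_t mem_upper;
        /* ... remaining Multiboot info fields, including the mmap ... */
    };

    static struct multiboot_info_sketch multiboot_copy;

    /* Called from early C code with the pointer the loader left in EBX;
     * the old assembly-language copy loop goes away. */
    void multiboot_init_sketch(const struct multiboot_info_sketch *info)
    {
        if (info != NULL) {
            memcpy(&multiboot_copy, info, sizeof(multiboot_copy));
        }
    }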
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
Implement a simple ACPI parser with enough functionality to
enumerate CPU cores and determine their local APIC IDs.
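The core of such a parser is a walk over the MADT's variable-length records;
a condensed sketch follows (structure names here are ours, field layout per
the ACPI spec):

    #include <stdint.h>

    struct acpi_sdt_sketch {
        char     sig[4];
        uint32_t length;          /* total table length, header included */
        uint8_t  rev, checksum;
        char     oem_id[6], oem_table_id[8];
        uint32_t oem_rev, creator_id, creator_rev;
    } __attribute__((packed));

    struct acpi_madt_sketch {
        struct acpi_sdt_sketch sdt;
        uint32_t lapic_addr;
        uint32_t flags;
        uint8_t  entries[];       /* variable-length controller records */
    } __attribute__((packed));

    struct acpi_madt_lapic_sketch {
        uint8_t  type;            /* 0 = processor local APIC */
        uint8_t  length;
        uint8_t  acpi_id;
        uint8_t  apic_id;
        uint32_t flags;           /* bit 0: processor enabled */
    } __attribute__((packed));

    /* Record the local APIC ID of each enabled CPU; returns the count. */
    static int madt_enum_cpus(const struct acpi_madt_sketch *madt,
                              uint8_t *ids, int max)
    {
        const uint8_t *p = madt->entries;
        const uint8_t *end = (const uint8_t *)madt + madt->sdt.length;
        int n = 0;

        while (p < end && n < max) {
            const struct acpi_madt_lapic_sketch *e = (const void *)p;

            if (e->type == 0U && (e->flags & 1U) != 0U) {
                ids[n++] = e->apic_id;
            }
            p += e->length;
        }

        return n;
    }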
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
* In ARC, push reg ==> sp=sp-4; *sp=reg. The original code had a bug:
saving ilink with "st ilink, [sp]" clobbers the interrupted stack's
contents. This commit fixes the bug and makes the code easier to
understand.
Signed-off-by: Wayne Ren <wei.ren@synopsys.com>
Various C and assembly modules make function calls to z_sys_trace_*,
which merely call the corresponding sys_trace_* functions. This commit
simplifies things by making direct calls to the sys_trace_* functions
from those modules. Subsequently, the z_sys_trace_* functions are
removed.
Signed-off-by: Mrinal Sen <msen@oticon.com>
- Remove redundant inclusions in irq_init.c
- Remove a comment about the thread_abort function, which does not
belong in this file (probably left over from code refactoring)
- Include arm cmsis.h only under #ifdef CONFIG_ARM
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
This used to be part of the "restore always" set of registers because
__swap was expected to return a value. No longer required, so RAX is
moved to the volatile registers and we save a few cycles occasionally.
Signed-off-by: Charles E. Youse <charles.youse@intel.com>
A space is allocated in the TSS for per-CPU variables. At present,
this is only a 'struct _cpu *' to find the _kernel CPU struct. The
locore routines are rewritten to find _current and _nested via this
pointer rather than referencing the _kernel global directly.
This is obviously in preparation for SMP support.
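Conceptually (field placement simplified; the real TSS keeps its
architectural layout and tucks this pointer into otherwise-reserved space):

    /* forward declaration of the kernel's per-CPU struct */
    struct _cpu;

    struct tss_with_percpu_sketch {
        /* ... architectural 64-bit TSS fields (RSPn, ISTn, I/O map base) ... */
        struct _cpu *cpu;   /* back-pointer to this CPU's _kernel.cpus[] entry */
    };

    /* locore then finds _current as tss->cpu->current and _nested as
     * tss->cpu->nested, instead of going through the _kernel global. */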
Signed-off-by: Charles E. Youse <charles.youse@intel.com>