Commit graph

1061 commits

Tomasz Bursztyka
88bac5d0b5 arch/x86: Implement the IRQ allocation and usage interfaces for intel 64
This is the only architecture user of this at the moment.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-12-22 12:16:52 +01:00
Tomasz Bursztyka
c76651b9ab arch/x86: Do not call irq controller on dedicated irq/vector function
MSI/MSI-X interrupts do not need any interrupt controller handling
(ioapic/loapic).

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-12-22 12:16:52 +01:00
Carles Cufi
4f64ae383d x86: acpi: Fix address-of-packed-mem warning
The warning below appears once -Waddress-of-packed-member is enabled:

/home/carles/src/zephyr/zephyr/arch/x86/core/acpi.c: In function
'z_acpi_find_table':
/home/carles/src/zephyr/zephyr/arch/x86/core/acpi.c:190:24: warning:
taking address of packed member of 'struct acpi_xsdt' may result in an
unaligned pointer value [-Waddress-of-packed-member]
  190 |    for (uint64_t *tp = &xsdt->table_ptrs[0]; tp < end; tp++) {

To avoid the warning, use an intermediate void * variable.

More info in #16587.

Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2021-12-10 14:08:59 +01:00
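
A minimal sketch of the workaround, using a hypothetical packed struct
in the shape the warning suggests (not the actual patch):

    struct acpi_xsdt {
        uint32_t len;          /* hypothetical field; leaves the
                                * array below misaligned */
        uint64_t table_ptrs[];
    } __packed;

    /* Taking &xsdt->table_ptrs[0] directly triggers
     * -Waddress-of-packed-member; going through an intermediate
     * void * does not, since void * carries no alignment claim. */
    void *tp_raw = &xsdt->table_ptrs[0];

    for (uint64_t *tp = tp_raw; tp < end; tp++) {
        /* ... walk the table pointers ... */
    }
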
Daniel Leung
650a629b08 debug: gdbstub: remove start argument from z_gdb_main_loop()
Storing the state that this is the first GDB break can be done in
the main GDB stub code. There is no need to store the state in the
architecture layer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Daniel Leung
e1180c8cee x86: gdbstub: add arch-specific funcs to read/write registers
This adds some architecture-specific functions to read/write
registers for the GDB stub. This is in preparation for the actual
introduction of these functions in the core GDB stub code, to avoid
breaking the build between commits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Daniel Leung
1cd7cccbb1 kernel: mem_domain: arch_mem_domain functions to return errors
This changes the arch_mem_domain_*() functions to return errors.
This allows the callers a chance to recover if needed.

Note that:
() Assertions where the code can bail out early without side
   effects are converted to CHECKIF(). (This usually means that
   updating of page tables or translation tables has not been
   started yet.)
() Other assertions are retained to signal fatal errors during
   development.
() The additional CHECKIF() are structured to bail out as early
   as possible. If errors are encountered inside a loop, the loop
   still continues, so with assertions disabled the behavior is
   the same as before this change.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-22 12:45:22 -05:00
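
A sketch of the CHECKIF() pattern described above, with an assumed
function body (see <sys/check.h> for the macro):

    int arch_mem_domain_thread_add(struct k_thread *thread)
    {
        /* Bail out early, before any page table updates, so the
         * caller can recover; with CONFIG_NO_RUNTIME_CHECKS the
         * check compiles away like an assertion. */
        CHECKIF(thread == NULL) {
            return -EINVAL;
        }

        /* ... update page tables, propagating any errors ... */
        return 0;
    }
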
Flavio Ceolin
7dd4297214 pm: Remove unused parameter
The number of ticks passed to z_pm_save_idle_exit() is not used,
so there is no need to have it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-17 11:15:49 -05:00
Andy Ross
1238410914 arch/x86_64: Add hook for CONFIG_SCHED_THREAD_USAGE accounting in ISRs
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.

This has to go into the assembly immediately before the callback-based
dispatch.  Note that the dispatch code was putting the vector number
in RCX, which was unfortunate as that's a caller-saved register.
Would be nice to clean this up in the future so it lives in a
preserved register but it's mildly complicated to make work with the
way we do the stack layout right now.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-11-08 21:32:20 -05:00
Daniel Leung
d33017b458 x86: x86-64: add arch_float_en-/dis-able() functions
This adds arch_float_enable() and arch_float_disable() to x86-64.
As x86-64 always has FP/SSE enabled, these operations are basically
no-ops. They are added just for the completeness of the arch
interface.

Fixes #38022

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-09-03 10:00:02 -04:00
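
A minimal sketch of such a no-op, assuming the usual arch interface
shape:

    int arch_float_enable(struct k_thread *thread, unsigned int options)
    {
        /* x86-64 always has FP/SSE enabled: nothing to do. */
        ARG_UNUSED(thread);
        ARG_UNUSED(options);

        return 0;
    }
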
Torsten Rasmussen
c6aded2dcb linker: align _image_rodata and _image_rom start/end/size linker symbols
Cleanup and preparation commit for linker script generator.

Zephyr linker scripts provide start and end symbols for each larger
area in the linker script.

The symbols _image_rom_start and _image_rom_end correspond to the group
ROMABLE_REGION defined in the ld linker scripts.

The symbols _image_rodata_start and _image_rodata_end are not placed as
an independent group but cover common-rom.ld, thread-local-storage.ld,
kobject-rom.ld and snippets-rodata.ld.

This commit aligns those names and prepares for generation of groups in
linker scripts.

The symbols describing the ROMABLE_REGION will be renamed to:
_image_rom_start -> __rom_region_start
_image_rom_end   -> __rom_region_end

The rodata will also use the group symbol notation as:
_image_rodata_start -> __rodata_region_start
_image_rodata_end   -> __rodata_region_end

Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
2021-08-28 08:48:03 -04:00
Daniel Leung
c2a01af003 x86: pin z_x86_set_stack_guard()
This function should be pinned in memory instead of simply being
put in the boot section, as it will be used when new threads are
created at runtime.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Daniel Leung
7605619c1e x86: userspace: page in stack before starting user thread
If the generic sections are not present at boot, the thread stack
may not be in physical memory. Unconditionally page in the stack
instead of relying on page faults, to speed up thread startup a
little bit.

Also, this prevents a double fault during thread setup when
setting up stack permission in z_x86_userspace_enter().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
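
A sketch of the idea, assuming Zephyr's demand-paging API is the
mechanism (call shape illustrative, not the actual patch):

    /* Fault the whole stack region in up front instead of taking
     * page faults one at a time while the thread starts. */
    k_mem_page_in((void *)thread->stack_info.start,
                  thread->stack_info.size);
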
Daniel Leung
30e5968d34 x86: don't clear BSS if not in physical memory at boot
If the BSS section is not present in physical memory at boot,
do not zero the section, or else page faults would occur.
The zeroing of BSS will be done once the paging mechanism
has been initialized.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Chen Peng1
fbe13b7bc2 cmake: oneApi: add oneApi support on windows.
Add the .S file extension suffix to CMAKE_ASM_SOURCE_FILE_EXTENSIONS,
because clang from oneApi can't recognize them as asm files on
Windows, so they won't be added to the build system.

Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
2021-07-27 07:20:12 -04:00
Dong Wang
a6800cefb1 x86/cache: fix issues in arch dcache flush function
Correct the wrong operand of the clflush instruction. The old operand
points to a location on the stack and doesn't work. The new one
works, taking Linux kernel code as reference.

The end address, rather than the size, should be rounded up.

Add a Kconfig option to disable the use of the mfence instruction
for SoCs that support clflush but not mfence.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
2021-07-23 16:22:07 -04:00
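
A hedged sketch of the corrected flush loop (standalone illustration;
the line size and the final fence policy are assumptions):

    #define LINE 64U  /* assumed cache line size */

    static void dcache_flush(void *addr, size_t size)
    {
        uintptr_t start = (uintptr_t)addr & ~(uintptr_t)(LINE - 1U);
        /* Round up the END address, not the size. */
        uintptr_t end = ((uintptr_t)addr + size + LINE - 1U)
                        & ~(uintptr_t)(LINE - 1U);

        for (uintptr_t p = start; p < end; p += LINE) {
            /* The "m" operand flushes the line holding *p itself,
             * not a pointer variable sitting on the stack. */
            __asm__ volatile("clflush %0" :: "m"(*(char *)p));
        }

        /* Elided on SoCs that have clflush but no mfence. */
        __asm__ volatile("mfence" ::: "memory");
    }
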
Maksim Masalski
466c5d9dea arch: x86: core: remove order eval of 'z_x86_check_stack_bounds' args
The code depends on the order of evaluation of the
z_x86_check_stack_bounds() function arguments. The fix is to create
two local variables, assign them the values of _df_esf.esp and
_df_esf.cs, and then call the function with those two local
variables as arguments.
Found as a coding guideline violation (MISRA R13.2) by a static
code scanning tool.

Change "int reason" to "unsigned reason" like in other functions.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-23 07:10:18 -04:00
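
A sketch of the fix (hypothetical signature):

    /* Before: the evaluation order of the two arguments
     * is unspecified in C. */
    z_x86_check_stack_bounds(_df_esf.esp, _df_esf.cs);

    /* After: force a defined order by evaluating into locals. */
    unsigned int esp = _df_esf.esp;
    unsigned int cs = _df_esf.cs;

    z_x86_check_stack_bounds(esp, cs);
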
Maksim Masalski
cbfd33f2ec arch: add comments to empty default case, add default LOG_ERR
According to the Zephyr Coding Guideline, all switch statements
shall be well-formed. Add a comment to the empty default case, and
add a LOG_ERR to the default case.

Found as a coding guideline violation (MISRA R16.1) by a static
code scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-22 08:23:43 -04:00
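
A sketch of the well-formed shape (names illustrative; assumes a
registered LOG module):

    switch (vector) {
    case VECTOR_A:
        handle_a();
        break;
    case VECTOR_B:
        handle_b();
        break;
    default:
        /* No action for other vectors (MISRA R16.1). */
        LOG_ERR("unhandled vector %d", vector);
        break;
    }
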
Daniel Leung
454522430f x86: acpi: use memory mapping/unmapping to access ACPI tables
Instead of accessing ACPI tables through physical addresses, do
memory mapping/unmapping so they can be accessed via virtual
addresses. This allows us to avoid identity-mapping all physical
memory, and thus there is no need for a page table large enough to
map everything.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
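
A sketch of the map/access/unmap pattern, assuming the
z_phys_map()/z_phys_unmap() helpers (flags illustrative):

    uint8_t *vaddr;

    /* Map just the table instead of identity-mapping all RAM. */
    z_phys_map(&vaddr, table_phys, table_size, K_MEM_CACHE_NONE);

    /* ... read the ACPI table through vaddr ... */

    z_phys_unmap(vaddr, table_size);
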
Daniel Leung
a3e817700f x86: acpi: limit search on where EBDA can be
This limits the search for the Extended BIOS Data Area (EBDA) to
the range 0x80000 to 0x100000, as this is usually where it resides.
If 0000:040e holds an address not pointing into this area, it is
probably an invalid address and should not be dereferenced, to
avoid a fault.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-06-11 16:12:52 +02:00
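
A sketch of the added range check (names hypothetical; the word at
0000:040e holds a real-mode segment, hence the shift by 4):

    uintptr_t ebda = (uintptr_t)(*(uint16_t *)0x040eUL) << 4;

    if ((ebda < 0x80000UL) || (ebda >= 0x100000UL)) {
        return NULL;  /* probably bogus; don't dereference it */
    }
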
Jeremy Bettis
2de4a902de cmake: Support coverage flags on all archs
Most archs' CMakeLists.txt files contain rules to add compiler and
linker flags for coverage if CONFIG_COVERAGE is enabled, but four of
them were missing this.

Instead, set the coverage flags in arch/common/CMakeLists.txt, which
affects all archs.

Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
2021-06-10 18:01:36 -04:00
Maksim Masalski
e96df40004 arch: x86: cast to the same size composite expression
The essential type of the RHS operand (64-bit) is wider than the
essential type of the composite expression in the LHS operand
(32-bit): entry_val on the LHS is 32-bit, while (phys + offset) on
the RHS is 64-bit. Cast the RHS composite expression to the
(pentry_t) type.

Found as a coding guideline violation (MISRA R10.7) by a static
code scanning tool.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-10 17:17:23 -04:00
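
A sketch of the before/after (types as described above):

    /* Before: 64-bit RHS assigned to a 32-bit LHS. */
    entry_val = phys + offset;

    /* After: cast the RHS composite expression (MISRA R10.7). */
    entry_val = (pentry_t)(phys + offset);
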
Andy Ross
9cb8dcbf84 arch/x86_64: Use modern CR0 assembly
The 16-bit bootstrap code for SMP CPUs was using the 286-era "lmsw"
instruction (load machine status word) to set the protected bit in CR0
(which is the modern evolution of the same register), presumably
because this is 16-bit code and we can't move a dword into CR0.

But that's wrong, because the full instruction set *is* available in
real mode on a 386; you just have to use an operand-size prefix to
get to it, which the assembler emits for you automatically when you
use the .code16 directive.

Write this conventionally and use modern (e.g. 1986-era) instructions.
It also has the advantage of not confusing much more modern
hypervisors like ACRN by issuing instructions they (and I!) never knew
existed.

Fixes #35076

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-03 20:07:50 -05:00
Andy Ross
5e9c583c24 arch/x86_64: Terrible, awful hackery to bootstrap entry
Because of a historical misunderstanding, by default the ACRN
hypervisor wants to load Zephyr at address 0x1000 and enter the binary
at that same address.  This entry point corresponds to the __start
symbol of the build they were given, which is a 1-cpu non-SMP
configuration.  Unfortunately, when we build with
CONFIG_MP_NUM_CPUS=1, the code in locore.S #if's out the 16 bit entry
point for the auxiliary CPUs at the start of the section.  So in the
build ACRN received, the start address happened to be 0x7000, the same
address we need to launch the AP processors from.

That's right: under ACRN, the SAME ADDRESS used to enter the OS in 32
bit mode needs to be used later to boot CPUs running in 16 bit real
mode!

The solution, such as it is, is to put a 32 bit jump at the entry
address which hops to the 32 bit OS entry code, and then scribble NOP
instructions over that jump once we get there so that the next time we
reach that address (in real mode) we fall through to the correct
entry.

This patch should be considered a temporary workaround.  While it
works on all x86 hardware, it's not really needed.  A much better
solution would be to eliminate the locore linker region entirely
(which causes other headaches) and enter the Zephyr binary in a 32 bit
address somewhere in the contiguous high memory area.  All that locore
is needed for is the 16 bit bootstrap code for SMP processors, which
is ~6 instructions and can be copied in from the kernel at runtime.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-06-03 20:07:50 -05:00
Johan Hedberg
8341a136d6 x86: multiboot: Fix NULL pointer dereferences
From the point where the info pointer value is checked, all code in
the z_multiboot_init() function depends on it being non-NULL.
Therefore, simply return from the function if it's NULL.

Fixes #33084

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2021-05-25 13:37:19 -04:00
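
A sketch of the guard (simplified):

    void z_multiboot_init(struct multiboot_info *info)
    {
        if (info == NULL) {
            return;  /* everything below dereferences info */
        }

        /* ... parse the multiboot info structure ... */
    }
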
Andy Ross
41e885947e arch/x86: Correct multiboot interpretation when building for EFI
When loaded via EFI, we obviously don't have a multiboot info pointer
available (we might have an EFI system table, but zefi doesn't pass
that through yet).  Don't try to parse the "whatever garbage was in
%rbp" as a multiboot table.

The configuration is a little clumsy, as strictly our EFI kconfig just
says we're "building for" EFI but not that we'll boot that way.  And
tests like arch/x86/info are trying to set CONFIG_MULTIBOOT=n
unconditionally, when it really should be something they detect from
devicetree or wherever.

Fixes #33545

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-15 15:30:02 -04:00
Daniel Leung
2c2d313cb9 x86: ia32: mark symbols for boot and pinned regions
This marks code and data within x86/ia32 so that they reside in the
boot and pinned regions. This is a step toward enabling demand
paging for the whole kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
512cb905d1 x86: ia32/linker: add boot and pinned sections
This adds both boot and pinned sections to the linker script
for ia32. This is required for enabling demand paging for the
kernel and data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung
af49ec0277 linker: remove TEXT_START macro
There is exactly one function defined with the TEXT_START macro, so
that the x86-32 __start can appear at the beginning of the text
section. Since no one else is using it, remove TEXT_START to
simplify things.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Carlo Caione
f000695243 cache: Rename sys_{dcache,icache}_* to sys_{data,instr}_cache_*
To have a common prefix.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Daniel Leung
18aad13d76 x86: mmu: implement arch_page_phys_get()
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
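
A hedged usage sketch (return-value convention assumed from the
generic page-frame API):

    uintptr_t phys;

    /* Translate a mapped virtual address back to physical;
     * expect an error for addresses that are not mapped. */
    if (arch_page_phys_get(virt_addr, &phys) == 0) {
        /* phys now holds the physical address behind virt_addr */
    }
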
Daniel Leung
786cf641dc x86: mmu: implement arch_mem_unmap()
This implements arch_mem_unmap() as counterpart to
arch_mem_map().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
c481fd412e x86: mmu: don't decrement z_free_page_count in reserving code
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and reflects the actual number
of page frames added to the free page frame list. Manipulating
z_free_page_count before this point messes up the accounting, so
remove the code that decrements z_free_page_count in
arch_reserved_pages_update() under x86.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
Daniel Leung
37672958ac x86: mmu: relax KERNEL_VM_OFFSET == SRAM_OFFSET
There was a restriction that KERNEL_VM_OFFSET must equal
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in the
real world due to various hardware designs, especially those where
SRAM is not aligned to the PDP or PD. So rework those bits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-05 19:42:25 -04:00
Jennifer Williams
ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and the corresponding
#ifdef'd code throughout (kernel/init.c, idle.c, core/common.S,
reset.S, ...) which holds the extern hooks for z_timestamp_main and
z_timestamp_idle used by the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Jennifer Williams
3e28a570c2 arch: x86: core: pcie: rephrase use of ain't
Rephrasing away from ain't, which is informal, uncommon, and can
be viewed as substandard or 'slang'.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-04-29 07:15:50 -04:00
Gerard Marull-Paretas
f163bdb280 power: move reboot functionality to os lib
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-28 20:34:00 -04:00
Gerard Marull-Paretas
6c7c9e2b99 arch: x86: remove usage of device_pm_control_nop
If device PM is not implemented, just use NULL.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2021-04-27 16:28:49 -04:00
Flavio Ceolin
03544f0b77 arch: x86: Fix 10.4 violations
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-10 09:59:37 -04:00
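
An illustrative sketch of this class of fix (values invented):

    uint32_t u = 10U;
    int32_t s = 3;

    /* Violation: operands of different essential type categories. */
    uint32_t bad = u + s;

    /* Fix: give both operands the same essential type category. */
    uint32_t good = u + (uint32_t)s;
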
Flavio Ceolin
85b2bd63c1 arch: x86: Fix 14.4 guideline violation
The controlling expression of an if statement has to be of an
essentially Boolean type.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-04-06 10:25:24 -04:00
Anas Nashif
0630452890 x86: make tests of a value against zero explicit
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
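
An illustrative sketch of the rule (names invented):

    /* Implicit test, flagged by MISRA 14.4: */
    if (count) {
        drain(count);
    }

    /* Explicit test against zero: */
    if (count != 0) {
        drain(count);
    }
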
Anas Nashif
25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical.
This is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
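
An illustrative sketch (hypothetical function):

    /* Declaration, e.g. in a header: */
    void arch_foo(struct k_thread *thread);

    /* The definition must use the identical identifier,
     * not a different one such as "t" (MISRA R8.3): */
    void arch_foo(struct k_thread *thread)
    {
        /* ... */
    }
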
Daniel Leung
4b477a9864 x86: mmu: allow copying page directory entries with large pages
This changes the assert when a large page is encountered into
copying the page directory entry to the new page directory. This
is needed when a large page entry is generated by gen_mmu.py. Note
that this still asserts when there are large page entries at a
higher level.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Daniel Leung
3ebcd8307e x86: mmu: add kconfig CONFIG_X86_EXTRA_PAGE_TABLE_PAGES
The whole page table is pre-allocated at build time and its size
depends on the range of the address space. This kconfig allows
reserving extra pages (of size CONFIG_MMU_PAGE_SIZE) for the page
table so that gen_mmu.py can make use of these extra pages.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-28 08:30:06 -04:00
Katsuhiro Suzuki
19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
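
A sketch of the unsupported path after the change (simplified):

    int arch_float_disable(struct k_thread *thread)
    {
        ARG_UNUSED(thread);

        /* Was -ENOSYS; -ENOTSUP matches the return value
         * specification of k_float_enable(). */
        return -ENOTSUP;
    }
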
Katsuhiro Suzuki
59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU of a thread, as
the counterpart of the existing k_float_disable() API. It also adds
an empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already
implement arch_float_enable(), so those implementations are left
untouched.

Motivation: the current Zephyr implementation does not allow using
the FPU on the main thread and other system threads such as the
work queue. Users need to create another thread with K_FP_REGS to
run floating point programs. Users can use the FPU more easily if
they can enable it on running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
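
A hedged usage sketch: enabling the FPU on the current (e.g. main)
thread instead of spawning a dedicated K_FP_REGS thread (options
value illustrative):

    int ret = k_float_enable(k_current_get(), 0);

    if (ret == 0) {
        /* floating point code is now safe on this thread */
    }
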
Kumar Gala
7d35a8c93d kernel: remove arch_mem_domain_destroy
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function, which has now been removed. So
remove arch_mem_domain_destroy as well.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2021-03-18 16:30:47 +01:00
Daniel Leung
c650721a0f x86: ia32: use virtual address for interrupt stack at boot
After the page table is loaded, we should be executing in the
virtual address space. Therefore we need to set ESP to the virtual
address of the interrupt stack for the boot process.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
9109fbb1a2 x86: ia32: load GDT in virtual memory after loading page table
This reverts commit d40e8ede8e.

This fixes triple faults after wiping the identity mapping of
physical memory when entering userspace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Andrew Boie
348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate the instruction pointer
  transition, with logic to clean this up once completed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung
03b413712a x86: gen_mmu: double map physical/virtual memory at top level
This reuses the page directory pointer table (PAE=y) or page
directory (PAE=n) to point to the next-level page directory table
(PAE=y) or page tables (PAE=n) to identity-map the physical
memory. This gets rid of the extra memory needed to host
the extra mappings, which are only used at boot. Following
patches will have code to actually unmap physical memory
during the boot process, so this avoids wasting some memory.

Since no extra memory needs to be reserved, this also reverts
commit ee3d345c09
("x86: mmu: reserve more space for page table if linking in virt").

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00