Commit graph

180 commits

Daniel Leung 027a1c30cd x86: add support for memory mapped stack for threads
This adds the necessary bits to enable memory-mapped thread
stacks on both x86 and x86_64. Note that these currently do
not support multi-level mappings (e.g. demand paging and
running in virtual address space, as on the qemu_x86/atom/virt
board), as the mapped stacks require actual physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-04-10 07:44:27 -04:00
Daniel Leung b69d2486fe kernel: rename Z_KERNEL_STACK_BUFFER to K_KERNEL_STACK_BUFFER
Simple rename to align with the kernel naming scheme. This macro is
used throughout the tree, especially in the architecture code.
As it is not a private API internal to the kernel, prefix it
appropriately with K_.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-03-27 19:27:10 -04:00
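
For illustration of the rename above, a minimal sketch of the macro in use
(the stack name and size are arbitrary and not taken from the commit):

```c
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

/* Define a kernel-only stack, then obtain a pointer to its usable buffer
 * via the renamed K_KERNEL_STACK_BUFFER (formerly Z_KERNEL_STACK_BUFFER). */
K_KERNEL_STACK_DEFINE(my_stack, 1024);

void show_stack_buffer(void)
{
	char *buf = K_KERNEL_STACK_BUFFER(my_stack);

	printk("stack buffer starts at %p\n", (void *)buf);
}
```
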
Daniel Leung 2ab367d149 x86: ia32/gdbstub: remove dead code
There is logically dead code that will never run, so remove it.

Fixes #66848

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-01-08 20:54:16 -06:00
Anas Nashif fb19d532ed arch: x86: z_x86_prep_c -> z_prep_c
Rename to use the common z_prep_c naming applied to all architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-12-11 18:23:52 -05:00
Dmitrii Golovanov f308299ca2 debug: gdbstub: kconfig: Add GDBSTUB_TRACE config option
Add the GDBSTUB_TRACE config option to extend GDB backend debug logging
for received remote commands and to help debug the GDB stub itself.

Signed-off-by: Dmitrii Golovanov <dmitrii.golovanov@intel.com>
2023-12-06 17:52:18 +00:00
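
A hedged sketch of how a Kconfig-gated trace hook like this is typically
wired up; the macro name and logging calls below are illustrative
assumptions, not the actual gdbstub implementation:

```c
#include <zephyr/logging/log.h>

LOG_MODULE_REGISTER(gdbstub_trace_example, LOG_LEVEL_DBG);

/* Hypothetical trace macro gated on the new option: when the option is
 * disabled, the hook compiles away to nothing. */
#ifdef CONFIG_GDBSTUB_TRACE
#define GDB_TRACE(fmt, ...) LOG_DBG(fmt, ##__VA_ARGS__)
#else
#define GDB_TRACE(fmt, ...) do { } while (false)
#endif

void on_remote_command(int len)
{
	GDB_TRACE("received %d byte(s) from the debugger", len);
}
```
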
Daniel Leung c972ef1a0f kernel: mm: move kernel mm functions under kernel includes
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.

Also made a quick change to Doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the docs.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-11-20 09:19:14 +01:00
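
In practice the move mostly changes which header callers include; a likely
before/after sketch, assuming the new kernel-side header is
<zephyr/kernel/mm.h>:

```c
/* Before: the k_* memory management APIs lived under sys/. */
/* #include <zephyr/sys/mem_manage.h> */

/* After: they are kernel public APIs under the kernel/ includes. */
#include <zephyr/kernel/mm.h>
```
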
Umar Nisar 31a6594212 drivers: loapic: add device tree support for loapic
As per #26393, the Local APIC uses a Kconfig-based option for
the base address. This patch adds DTS binding support to the driver,
just like its counterpart, the I/O APIC.

Signed-off-by: Umar Nisar <umar.nisar@intel.com>
2023-09-01 16:36:18 +02:00
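
A sketch of what devicetree support lets the driver do instead of reading a
Kconfig symbol; the node label used here is an assumption for illustration
only:

```c
#include <zephyr/devicetree.h>

/* Assumed node label for the Local APIC node; the real label may differ. */
#define LOAPIC_NODE          DT_NODELABEL(intc_loapic)

/* The base address now comes from the node's reg property rather than a
 * Kconfig-based option. */
#define LOAPIC_BASE_ADDRESS  DT_REG_ADDR(LOAPIC_NODE)
```
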
Anas Nashif 6baa622958 arch: move exc_handle.h under zephyr/arch/common
This header is private and included only in architecture code; there is
no need for it to be at the top of the public include directory.

Note: this might move to a more private location later. For now, just
cleaning up the obvious issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2023-08-31 09:19:19 -04:00
Daniel Leung 16bd0c861f x86: rename shadow variables
Rename shadow variables found by -Wshadow.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2023-08-22 11:39:58 +02:00
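
A minimal illustration of the kind of -Wshadow warning being fixed (not the
actual code touched by the commit):

```c
static int status;	/* file-scope variable */

int update_status(int input)
{
	/* This local 'status' shadows the file-scope variable and triggers
	 * -Wshadow; the fix is simply to rename one of them. */
	int status = input * 2;

	return status;
}

int read_status(void)
{
	return status;	/* uses the file-scope variable */
}
```
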
Flavio Ceolin 596e77f562 x86: Early TLS initialization
Allow early boot code to use thread-local storage when
it is enabled.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-08-08 19:08:04 -04:00
Qipeng Zha 1c095239b7 arch: x86: ia32: don't create FP context for NULL thread
Handle the case where z_float_enable() is called with a NULL thread.

Signed-off-by: Dong Wang <dong.d.wang@intel.com>
Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
2023-05-24 12:41:06 -04:00
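
A minimal sketch of the kind of guard described, assuming z_float_enable()
takes a thread pointer and an options word; this is not the exact in-tree
patch:

```c
#include <zephyr/kernel.h>
#include <errno.h>

int z_float_enable(struct k_thread *thread, unsigned int options)
{
	ARG_UNUSED(options);

	/* Bail out early instead of trying to build an FP context for a
	 * NULL thread. */
	if (thread == NULL) {
		return -EINVAL;
	}

	/* ... set up the floating-point context for the thread ... */
	return 0;
}
```
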
Qipeng Zha d963767369 arch: x86: add irq runtime statistics
Unlike the tracing module, which is mainly for debug usage, this
allows runtime profiling of IRQ performance data. The target is to
enable it in product releases, since a platform can choose to
implement it with a lightweight protocol.

To use it, enable this option and implement runtime_irq_stats()
in platform code; for example, the Intel ISH platform implements it
with the SHMI protocol to allow the host to profile IRQ stats.

Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
2023-05-22 13:29:14 -04:00
Qipeng Zha c626bac016 arch: x86: fix SSE init issue when enable paging
With the paging config, a physical address needs to be used, as
paging is not yet enabled at this point.

Per the IA manual, the LDMXCSR instruction loads the source operand
into the MXCSR control/status register; the source operand is a
32-bit memory location.

Signed-off-by: Qipeng Zha <qipeng.zha@intel.com>
2023-05-08 16:55:27 -04:00
Flavio Ceolin a6b4af4a93 ia32: irq: Remove unnecessary header
printk is not used anywhere in this file. Just remove it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2023-01-09 12:07:28 -05:00
Keith Packard e802d53900 x86: buf return from gdb_reg_readone is not a string
The buffer contents returned from arch_gdb_reg_readone are a counted array
of bytes, not a C string. Use memcpy instead of strcpy on the failure
return path to avoid a compiler warning about missing NUL termination.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-10-26 12:00:04 +02:00
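
The distinction matters because the register data is raw bytes; a small
illustration with made-up buffer names:

```c
#include <string.h>

void copy_reg_bytes(char *dst, const char *src, size_t len)
{
	/* Correct: copy exactly 'len' bytes of a counted buffer. */
	memcpy(dst, src, len);

	/* Wrong for this data: strcpy(dst, src) would keep reading until it
	 * found a NUL terminator that a raw register dump may never contain. */
}
```
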
Johann Fischer 3c971307dc arch/kernel/soc/samples: use unsigned int for irq_lock()
irq_lock() returns an unsigned integer key.
Generated by spatch using semantic patch
scripts/coccinelle/irq_lock.cocci

Signed-off-by: Johann Fischer <johann.fischer@nordicsemi.no>
2022-07-14 14:37:13 -05:00
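
The pattern the coccinelle script enforces looks like this:

```c
#include <zephyr/irq.h>

void critical_work(void)
{
	/* irq_lock() returns an unsigned int key, so the variable holding it
	 * should be unsigned int rather than plain int. */
	unsigned int key = irq_lock();

	/* ... critical section ... */

	irq_unlock(key);
}
```
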
Gerard Marull-Paretas 4b91c2d79f asm: update files with <zephyr/...> include prefix
Assembler files were not migrated with the new <zephyr/...> prefix.
Note that the conversion has been scripted, refer to #45388 for more
details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-09 12:45:29 -04:00
Gerard Marull-Paretas 16811660ee arch: migrate includes to <zephyr/...>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-06 19:57:22 +02:00
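
The mechanical change looks like this in any affected file (the headers
below are just common examples):

```c
/* Before the migration: */
/* #include <kernel.h>     */
/* #include <sys/printk.h> */

/* After, with the <zephyr/...> prefix: */
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>
```
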
Stephanos Ioannidis f9a3f02b86 x86: Initialise FPU regs during thread creation for eager FPU sharing
When "eager FPU sharing" mode is enabled, FPU registers must be
initialised at the time of thread creation because the floating-point
context is always active and no further FPU initialisation is performed
later.

Note that, in case of the "lazy FPU sharing" mode, floating-point
context is inactive by default and the FPU is initialised when the
first floating-point instruction is executed.

Refer to the issue #44902 for more details.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-04-18 17:23:48 -07:00
Tomasz Bursztyka f19f9db8df arch/x86: Expand cpu boot argument
In order to determine at runtime whether it booted via multiboot or EFI,
introduce a dedicated x86 CPU argument structure which holds the
type and the actual pointer delivered by the boot method (multiboot_info
or efi_system_table).

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
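
A hedged sketch of the kind of structure described; the type and member
names are illustrative, not the exact in-tree definitions:

```c
/* Which boot method handed control to the kernel. */
enum x86_boot_arg_type {
	X86_BOOT_TYPE_MULTIBOOT,
	X86_BOOT_TYPE_EFI,
};

/* The argument passed up from early boot code: a type tag plus the pointer
 * delivered by that method (multiboot_info or efi_system_table). */
struct x86_boot_arg {
	enum x86_boot_arg_type type;
	void *arg;
};
```
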
Nazar Kazakov 9713f0d47c everywhere: fix typos
Fix a lot of typos

Signed-off-by: Nazar Kazakov <nazar.kazakov.work@gmail.com>
2022-03-14 20:22:24 -04:00
Daniel Leung 25f87aac87 x86: remove @return doc for void functions
For functions returning nothing, there is no need to document
with @return, as Doxygen complains about "documented empty
return type of ...".

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-01-12 16:02:16 -05:00
Daniel Leung 650a629b08 debug: gdbstub: remove start argument from z_gdb_main_loop()
Storing the state of whether this is the first GDB break can be done
in the main GDB stub code. There is no need to store that state
in the architecture layer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Daniel Leung e1180c8cee x86: gdbstub: add arch-specific funcs to read/write registers
This adds some architecture-specific functions to read/write
registers for the GDB stub. This is in preparation for the actual
introduction of these functions in the core GDB stub code to
avoid breaking the build in between commits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-11-30 15:24:00 -05:00
Flavio Ceolin 7dd4297214 pm: Remove unused parameter
The ticks parameter of z_pm_save_idle_exit() is not used, so there is
no need to have it.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-11-17 11:15:49 -05:00
Daniel Leung 30e5968d34 x86: don't clear BSS if not in physical memory at boot
If the BSS section is not present in physical memory at boot,
do not zero the section, or else page faults would occur.
The zeroing of BSS will be done once the paging mechanism
has been initialized.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-08-26 21:16:22 -04:00
Maksim Masalski 466c5d9dea arch: x86: core: remove order eval of 'z_x86_check_stack_bounds' args
The code depends on the order of evaluation of the
'z_x86_check_stack_bounds' function arguments.
The fix is to create two local variables, assign them the values
of _df_esf.esp and _df_esf.cs, and then call the function with those
two local variables as arguments.
Found as a coding guideline violation (MISRA R13.2) by a static
code scanning tool.

Also change "int reason" to "unsigned reason" as in other functions.

Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
2021-06-23 07:10:18 -04:00
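
A sketch of the fix described above, assuming a checker signature along the
lines of z_x86_check_stack_bounds(addr, size, cs); the stand-in types below
are not the real arch definitions:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the real exception stack frame and checker. */
static volatile struct {
	uintptr_t esp;
	uint16_t cs;
} _df_esf;

bool z_x86_check_stack_bounds(uintptr_t addr, size_t size, uint16_t cs);

void bounds_check_example(void)
{
	/* Evaluate the volatile members into locals first, then pass the
	 * locals, so the result no longer depends on the order in which the
	 * compiler evaluates function arguments (MISRA R13.2). */
	uintptr_t esp = _df_esf.esp;
	uint16_t cs = _df_esf.cs;

	(void)z_x86_check_stack_bounds(esp, 0, cs);
}
```
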
Daniel Leung 2c2d313cb9 x86: ia32: mark symbols for boot and pinned regions
This marks code and data within x86/ia32 so they reside in the
boot and pinned regions. This is a step toward enabling
demand paging for the whole kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung 512cb905d1 x86: ia32/linker: add boot and pinned sections
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel and data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung af49ec0277 linker: remove TEXT_START macro
There is exactly one function defined with the TEXT_START
macro, so that the x86-32 __start can appear at the beginning of
the text section. Since nothing else uses it, remove
TEXT_START to simplify things.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung 37672958ac x86: mmu: relax KERNEL_VM_OFFSET == SRAM_OFFSET
There was a restriction that KERNEL_VM_OFFSET must be equal to
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in
the real world due to various hardware designs, especially those
where SRAM is not aligned to a PDP or PD. So rework those bits.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-05 19:42:25 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
Make identifiers used in the declaration and definition identical. This
is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
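
The rule is simply that parameter names match between declaration and
definition; a small illustration with a made-up function:

```c
#include <stddef.h>
#include <stdint.h>

/* Non-compliant: declaration and definition use different parameter names.
 *   int sum_bytes(const uint8_t *data, size_t n);
 *   int sum_bytes(const uint8_t *buf, size_t len) { ... }
 *
 * Compliant (MISRA rule 8.3): identical identifiers in both. */
int sum_bytes(const uint8_t *buf, size_t len);

int sum_bytes(const uint8_t *buf, size_t len)
{
	int total = 0;

	for (size_t i = 0; i < len; i++) {
		total += buf[i];
	}
	return total;
}
```
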
Katsuhiro Suzuki 19db485737 kernel: arch: use ENOTSUP instead of ENOSYS in k_float_disable()
This patch replaces ENOSYS with ENOTSUP to keep consistency with
the return value specification of k_float_enable().

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
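
From a caller's point of view, the change means the unsupported case is now
reported consistently by both APIs:

```c
#include <zephyr/kernel.h>
#include <errno.h>

void fpu_disable_example(void)
{
	int ret = k_float_disable(k_current_get());

	if (ret == -ENOTSUP) {
		/* FPU sharing is not supported in this configuration;
		 * previously this case was reported as -ENOSYS. */
	}
}
```
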
Katsuhiro Suzuki 59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread. It is
the counterpart of the existing k_float_disable() API. It also adds an
empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already implement
arch_float_enable(), so those implementations are not touched.

Motivation: the current Zephyr implementation does not allow using the
FPU on the main thread or other system threads such as the work queue.
Users need to create another thread with K_FP_REGS for floating-point
programs. Users can use the FPU more easily if they can enable it on
running threads.

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
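
Usage as described in the motivation above: enable FPU handling on an
already-running thread instead of creating a dedicated K_FP_REGS thread
(the exact option bits accepted depend on the architecture and
configuration):

```c
#include <zephyr/kernel.h>

void enable_fpu_on_current_thread(void)
{
	/* Enable floating-point register sharing for the calling thread,
	 * e.g. the main thread or a work queue thread. */
	int ret = k_float_enable(k_current_get(), K_FP_REGS);

	if (ret != 0) {
		/* e.g. -ENOTSUP if this arch/config does not support it */
	}
}
```
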
Daniel Leung c650721a0f x86: ia32: use virtual address for interrupt stack at boot
After the page table is loaded, we should be executing in the virtual
address space. Therefore we need to set ESP to the virtual
address of the interrupt stack for the boot process.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung 9109fbb1a2 x86: ia32: load GDT in virtual memory after loading page table
This reverts commit d40e8ede8e.

This fixes triple faults after wiping the identity mapping of
physical memory when entering userspace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Andrew Boie 348d1315d2 x86: 32-bit: restore virtual linking capability
This reverts commit 7d32e9f9a5.

We now allow the kernel to be linked virtually. This patch:

- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate the instruction pointer
  transition, with logic to clean this up after the transition completes.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Daniel Leung d39012a590 x86: use Z_MEM_*_ADDR instead of Z_X86_*_ADDR
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86 specific
versions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
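
Illustrative use of the generic translation macros that replace the x86-only
ones; the variable names are made up, and the header providing the macros
has moved around over time:

```c
#include <stdint.h>
/* Z_MEM_PHYS_ADDR()/Z_MEM_VIRT_ADDR() come from the kernel's memory
 * management headers (<sys/mem_manage.h> at the time of this change). */

static int some_data;

void translation_example(void)
{
	/* Generic physical<->virtual translation, replacing the former
	 * x86-only Z_X86_*_ADDR equivalents. */
	uintptr_t phys = Z_MEM_PHYS_ADDR((uintptr_t)&some_data);
	uintptr_t virt = Z_MEM_VIRT_ADDR(phys);

	(void)virt;
}
```
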
Daniel Leung 8cfdd91d54 x86: ia32/fatal: be explicit on pointer math with _df_tss.cr3
For some unknown reason, the page table address for _df_tss.cr3
did not get translated from virtual to physical. However,
the translation is done if the pointer to the page table is obtained
through a reference to the first array element (instead of simply
through the name of the array). Without CR3 pointing to the page
table via a physical address, double fault handling does not work.
So fix this by being explicit with the page table pointer.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung 9ce77abf23 x86: ia32: jump to virtual address before calling z_x86_prep_c
We have been assuming that physical memory
is identity-mapped into the virtual address space. However, with
the ability to set CONFIG_KERNEL_VM_BASE separately from
CONFIG_SRAM_BASE_ADDRESS, that assumption is no longer valid.
This changes the x86 32-bit boot code so that once
the page table is loaded, we can proceed with executing in
the virtual address space: do a long jump to the virtual
address just before calling z_x86_prep_c. From this point on,
code execution is in the virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung a1afe9be5e x86: ia32: do virtual address translation at boot
This adds virtual address translation to a few variables
used in crt0.S. This is needed because they are linked at
virtual addresses, but before the page table is loaded they
are not accessible at those addresses and must be
referenced via physical addresses.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-03 20:10:22 -05:00
Daniel Leung c0ee8c4a43 x86: use z_bss_zero and z_data_copy
Instead of doing these in assembly, use the common z_bss_zero()
and z_data_copy() C functions. This simplifies the code
a bit and ensures we won't miss any future additions to these two
functions under x86 (x86_64 was actually not
clearing the gcov BSS area).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
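
The common helpers referred to are plain C functions, so the prep code can
simply call them; a sketch of the call sequence, not the exact crt0/prep_c
code:

```c
/* Declarations normally come from the kernel's internal headers. */
void z_bss_zero(void);
void z_data_copy(void);

void early_init_example(void)
{
	/* Zero .bss and copy .data from ROM to RAM using the common C
	 * helpers instead of open-coded assembly. */
	z_bss_zero();
	z_data_copy();
}
```
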
Daniel Leung dd98de880a x86: move calling z_loapic_enable into z_x86_prep_c
This moves the call to z_loapic_enable() from crt0.S into
z_x86_prep_c(). This is done so we can move BSS clearing
and data section copying inside z_x86_prep_c(), as
these are needed before calling z_loapic_enable().

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-21 22:28:33 -05:00
Tomasz Bursztyka 5e4e0298e9 arch/x86: Generalize cache manipulation functions
We assume that all x86 CPUs have the clflush instruction, and the
cache line size is now provided through DTS.

So detecting the clflush instruction as well as the cache line size at
runtime is no longer required, and that code is removed.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2021-02-15 09:43:30 -05:00
Daniel Leung ce44048d46 x86: rename CONFIG_SSE* to CONFIG_X86_SSE*
This adds the X86 keyword to these Kconfig options to indicate they
are for x86. The old options are still there, marked as
deprecated.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-15 08:21:15 -05:00
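
From C code the rename is just a Kconfig symbol swap; the old symbol keeps
working while deprecated:

```c
/* Preferred after the rename: */
#ifdef CONFIG_X86_SSE
/* SSE-specific code path */
#endif

/* Formerly (now deprecated): #ifdef CONFIG_SSE ... #endif */
```
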
Daniel Leung d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix from those items (functions, enums, etc.) that
are used outside the coredump subsystem. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Daniel Leung ff720cd9b3 x86: enable soft float support for Zephyr SDK
This adds the correct compiler and linker flags to
support software floating-point operations. The flags
need to be added to TOOLCHAIN_*_FLAGS for GCC to find
the correct library (when calling GCC with
--print-libgcc-file-name).

Note that software floating point needs to be turned
on for Newlib. This is because Newlib uses floating-point
numbers in its various printf() functions, which results
in floating-point instructions being emitted by the
toolchain. These instructions are placed very early in
the functions, so they are executed even though the format
string contains no floating-point conversions. Without
CONFIG_FPU to enable hardware floating-point support, any
calls to printf()-like functions result in exceptions
complaining that the FPU is not available. Forcing
CONFIG_FPU=y with Newlib is an option, but because the OS
doesn't know which threads would call these printf()
functions, Zephyr would have to assume all threads use the
FPU, incurring a performance penalty as every context
switch would then need to save FPU registers. A compromise
is to use soft float instead: Newlib built with soft float
contains no floating-point instructions and can still
support its printf()-like functions.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-20 16:45:31 -05:00
Carlo Caione dc4cc5565b x86: cache: Use new cache APIs
Add a helper to correctly use the new cache APIs.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Flavio Ceolin 28cf88183a x86: power: Remove dead code
X86 currently has no support for deep sleep states, so just remove this
dead code.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-08 06:49:43 -05:00
Daniel Leung 0aa61793de x86: remove custom switch to main thread function
The original idea of using a custom switch-to-main-thread
function was to make sure the buffer used to save floating-point
registers is aligned correctly, or else an exception would be
raised when saving/restoring those registers. Since
the buffer's struct is defined with an alignment hint
for the toolchain, the alignment is enforced by the toolchain
as long as the k_thread struct is a dedicated,
declared variable. So there is no need for the custom
switch-to-main-thread function anymore.

This also allows the stack usage calculation of
the interrupt stack to work properly, as the end of
the interrupt stack is no longer used for the dummy
thread.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-04 16:59:59 -08:00