The warning below appears once -Waddress-of-packed-member is enabled:
/home/carles/src/zephyr/zephyr/arch/x86/core/acpi.c: In function
'z_acpi_find_table':
/home/carles/src/zephyr/zephyr/arch/x86/core/acpi.c:190:24: warning:
taking address of packed member of 'struct acpi_xsdt' may result in an
unaligned pointer value [-Waddress-of-packed-member]
190 | for (uint64_t *tp = &xsdt->table_ptrs[0]; tp < end; tp++) {
To avoid the warning, use an intermediate void * variable.
More info in #16587.
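A minimal sketch of the workaround, using a hypothetical packed
struct rather than the actual acpi.c definitions:

    #include <stdint.h>

    struct xsdt {
            uint32_t length;                 /* hypothetical fields */
            uint64_t table_ptrs[];
    } __attribute__((__packed__));

    static void walk_tables(struct xsdt *xsdt, uint64_t *end)
    {
            /* void * carries no alignment requirement, so routing
             * the packed member's address through it silences the
             * -Waddress-of-packed-member warning
             */
            void *tp_v = &xsdt->table_ptrs[0];

            for (uint64_t *tp = tp_v; tp < end; tp++) {
                    /* process *tp */
            }
    }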
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
Storing the state of whether this is the first GDB break can be
done in the main GDB stub code. There is no need to store the
state in the architecture layer.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds some architecture-specific functions to read/write
registers for the GDB stub. This is in preparation for their use
by the core GDB stub code, to avoid breaking the build between
commits.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This changes the arch_mem_domain_*() functions to return errors.
This allows the callers a chance to recover if needed.
Note that:
1. Assertions where the code can bail out early without side
   effects are converted to CHECKIF(); this usually means that
   updating of page tables or translation tables has not been
   started yet. (See the sketch below.)
2. Other assertions are retained to signal fatal errors during
   development.
3. The additional CHECKIF() checks are structured to bail out
   early where possible. If errors are encountered inside a
   loop, the loop still runs to completion, so with assertions
   disabled the behavior matches what it was before this change.
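A minimal sketch of the CHECKIF() pattern, using a simplified
and hypothetical arch_mem_domain_*() implementation:

    #include <kernel.h>
    #include <sys/check.h>
    #include <errno.h>

    int arch_mem_domain_thread_add(struct k_thread *thread)
    {
            /* bail out early, before any page table updates, so
             * there are no side effects on error
             */
            CHECKIF(thread == NULL) {
                    return -EINVAL;
            }

            /* ... update page tables, continuing the loop on
             * per-entry errors as before ...
             */
            return 0;
    }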
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Replace the deprecated distutils version handling with
version.parse from the packaging module.

This prevents the following warning:
DeprecationWarning: The distutils package is deprecated
and slated for removal in Python 3.12. Use setuptools or
check PEP 632 for potential alternatives
Signed-off-by: Julien Massot <julien.massot@iot.bzh>
Call into z_thread_usage_stop() before ISR entry to avoid including
interrupt handling totals in thread usage stats.
This has to go into the assembly immediately before the callback-based
dispatch. Note that the dispatch code was putting the vector number
in RCX, which was unfortunate as that's a caller-saved register.
Would be nice to clean this up in the future so it lives in a
preserved register, but it's mildly complicated to make that
work with the way we do the stack layout right now.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds arch_float_enable() and arch_float_disable() to x86-64.
As x86-64 always has FP/SSE enabled, these operations are basically
no-ops. These are added just for the completeness of the arch
interface.
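A minimal sketch of what the no-op implementations might look
like; the signatures are assumed from the generic arch
interface, and the disable return value is an assumption:

    #include <kernel.h>
    #include <errno.h>

    int arch_float_enable(struct k_thread *thread,
                          unsigned int options)
    {
            /* FP/SSE is always enabled on x86-64: nothing to do */
            ARG_UNUSED(thread);
            ARG_UNUSED(options);
            return 0;
    }

    int arch_float_disable(struct k_thread *thread)
    {
            /* FP/SSE cannot be turned off on x86-64; assumed to
             * report -ENOTSUP here
             */
            ARG_UNUSED(thread);
            return -ENOTSUP;
    }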
Fixes #38022
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each
larger area in the linker script.

The symbols _image_text_start and _image_text_end sometimes
include linker/kobject-text.ld. This means there must be both
the regular __text_start and __text_end symbols for the pure
text section, as well as <group>_start and <group>_end symbols.

The symbols describing the text region, which covers more than
just the text section itself, will thus be changed to:
_image_text_start -> __text_region_start
_image_text_end -> __text_region_end
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
Cleanup and preparation commit for linker script generator.
Zephyr linker scripts provide start and end symbols for each
larger area in the linker script.

The symbols _image_rom_start and _image_rom_end correspond to
the group ROMABLE_REGION defined in the ld linker scripts.

The symbols _image_rodata_start and _image_rodata_end are not
placed as an independent group but cover common-rom.ld,
thread-local-storage.ld, kobject-rom.ld and snippets-rodata.ld.

This commit aligns those names and prepares for generation of
groups in linker scripts.
The symbols describing the ROMABLE_REGION will be renamed to:
_image_rom_start -> __rom_region_start
_image_rom_end -> __rom_region_end
The rodata will also use the group symbol notation as:
_image_rodata_start -> __rodata_region_start
_image_rodata_end -> __rodata_region_end
Signed-off-by: Torsten Rasmussen <Torsten.Rasmussen@nordicsemi.no>
This function should be pinned in memory instead of simply
putting it in the boot section, as this function will be
used when new threads are created at runtime.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If the generic section is not present at boot, the thread stack
may not be in physical memory. Unconditionally page in the stack
instead of relying on page faults, to speed up thread startup
a little.
Also, this prevents a double fault during thread setup when
setting up stack permission in z_x86_userspace_enter().
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When converting the address and size arguments for extra
mappings, the script assumes they are always base 16. This is
not always the case. So let Python's own int() decide how to
interpret the values, as it also supports the "0x" prefix.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With demand paging, it is possible for data pages to not be
present in physical memory. The gen_mmu.py script is updated
so that, if so desired, the generic sections are marked
non-present so the paging mechanism can bring them in
if needed.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
If the BSS section is not present in physical memory at boot,
do not zero the section, or else page faults would occur.
The zeroing of BSS will be done once the paging mechanism
has been initialized.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add the .S file extension to CMAKE_ASM_SOURCE_FILE_EXTENSIONS,
because clang from oneAPI can't recognize such files as assembly
on Windows, so they won't be added to the build system.
Signed-off-by: Chen Peng1 <peng1.chen@intel.com>
Correct the wrong operand of the clflush instruction. The old
operand points to a location inside the stack and doesn't work.
The new one works well, taking the Linux kernel code as a
reference.

The end address, rather than the size, should be rounded up.

Add a Kconfig option to disable use of the mfence instruction
for SoCs that support clflush but not mfence.
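A minimal sketch of the corrected pattern; the names, the cache
line size constant, and the surrounding loop are assumptions,
not the actual Zephyr code:

    #include <stdint.h>
    #include <stddef.h>

    #define CACHE_LINE_SIZE 64U   /* assumed cache line size */

    static void cache_flush(void *addr, size_t size)
    {
            uintptr_t start = (uintptr_t)addr;
            /* round up the end address, not the size */
            uintptr_t end = (start + size + (CACHE_LINE_SIZE - 1U)) &
                            ~((uintptr_t)CACHE_LINE_SIZE - 1U);

            for (uintptr_t p = start; p < end; p += CACHE_LINE_SIZE) {
                    /* the operand must dereference p, so clflush
                     * acts on the target line rather than on the
                     * stack slot that holds p
                     */
                    __asm__ volatile("clflush %0"
                                     : "+m"(*(volatile char *)p));
            }

            /* skipped on SoCs without mfence (new Kconfig option) */
            __asm__ volatile("mfence" ::: "memory");
    }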
Signed-off-by: Dong Wang <dong.d.wang@intel.com>
The code depends on the order of evaluation of the
z_x86_check_stack_bounds() function arguments.

The fix is to create two local variables, assign them the values
of _df_esf.esp and _df_esf.cs, and then call the function with
those two local variables as arguments.
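A minimal sketch of the described fix; the struct shape, the
function signature and the size argument are assumptions:

    #include <stdint.h>
    #include <stddef.h>

    extern volatile struct {
            uint32_t esp;
            uint32_t cs;
    } _df_esf;                                 /* assumed shape */

    extern void z_x86_check_stack_bounds(uintptr_t addr,
                                         size_t size,
                                         uint16_t cs); /* assumed */

    void check_df_stack(void)
    {
            /* argument evaluation order is unspecified in C, so
             * read the frame fields into locals first (MISRA R13.2)
             */
            uintptr_t esp = _df_esf.esp;
            uint16_t cs = (uint16_t)_df_esf.cs;

            z_x86_check_stack_bounds(esp, 0, cs);
    }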
Found as a coding guideline violation (MISRA R13.2) by static
coding scanning tool.
Change "int reason" to "unsigned reason" like in other functions.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
According to the Zephyr Coding Guideline all switch statements
shall be well-formed.
Add a comment to the empty default case.
Add a LOG_ERR to the default case.
Found as a coding guideline violation (MISRA R16.1) by static
coding scanning tool.
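A minimal sketch of a well-formed default case; the switch
subject and the handling are illustrative only:

    #include <logging/log.h>
    LOG_MODULE_DECLARE(os);

    static void dispatch(int vector)
    {
            switch (vector) {
            case 0:
                    /* handle the known vector */
                    break;
            default:
                    /* should never be reached; MISRA R16.1 wants
                     * a well-formed default, so log the anomaly
                     */
                    LOG_ERR("unhandled vector %d", vector);
                    break;
            }
    }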
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
commit 5e9c583c24 ("arch/x86_64: Terrible, awful hackery to
bootstrap entry") introduced a terrible trick which begins execution
at the bottom of .locore with a jump, which then gets replaced with
NOP instructions for the benefit of 16 bit real mode startup of the
other CPUs later on.
But I forgot that EFI enters in 64 bit code natively, and so never
hits that path. And moving it to the 64 bit setup code doesn't work,
because at that point when we are NOT loaded from EFI, we already have
the Zephyr page tables in place that disallow writes to .locore.
So do it in the EFI loader, which while sort of a weird place, has the
benefit of being in C instead of assembly.
Really all this code needs to go away. A proper x86 entry
architecture would enter somewhere in the main blob, and .locore
should be a tiny stub we copy in at runtime.
Fixes#36107
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Since physical memory is no longer wholly identity mapped, there
is no need to set the VM size larger than the physical memory
size. The VM size was 2GB (the max physical memory size of x86
boards) + 1GB (for memory mappings).
So simply shrink the size to 1GB, as the kernel size is
small and we still have a large chunk of space to do
memory mapping.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With ACPI doing dynamic memory mapping and unmapping
to access ACPI tables, there is no need to identity
map all the physical memory anymore. So remove
the "select" statement in ACPI kconfig.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Instead of accessing ACPI tables through physical addresses, do
memory mapping/unmapping so they can be accessed via virtual
addresses. This allows us to avoid identity mapping all
physical memory, and thus no need for a page table large enough
to map everything.
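A minimal sketch of the access pattern, assuming the
z_phys_map()/z_phys_unmap() interface and illustrative flags:

    #include <kernel.h>
    #include <sys/mem_manage.h>

    static void with_table(uintptr_t phys, size_t size)
    {
            uint8_t *virt;

            /* map the table for the duration of the access */
            z_phys_map(&virt, phys, size,
                       K_MEM_PERM_RW | K_MEM_CACHE_NONE);

            /* ... parse the ACPI table through virt ... */

            z_phys_unmap(virt, size);
    }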
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This limits the search for the Extended BIOS Data Area (EBDA) to
the range 0x80000 to 0x100000, as this is usually where it
resides. If 0000:040e contains an address not pointing into this
area, it is probably invalid and should not be de-referenced, to
avoid a segfault.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Most archs' CMakeLists.txt files contain rules to add compiler and linker
flags for coverage if CONFIG_COVERAGE is enabled, but 4 of them were
missing this.
Instead, set the coverage flags in arch/common/CMakeLists.txt which
affects all archs.
Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
The essential type of the RHS operand (64 bit) is wider than the
essential type of the composite expression in the LHS operand
(32 bit): LHS entry_val is 32 bit, and RHS (phys + offset) is
64 bit.
Cast RHS composite expression to the (pentry_t) type.
Found as a coding guideline violation (MISRA R10.7) by static
coding scanning tool.
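A minimal sketch of the fix; the pentry_t width here is an
assumption (non-PAE):

    #include <stdint.h>

    typedef uint32_t pentry_t;  /* assumed 32-bit page table entry */

    static pentry_t make_entry(uint64_t phys, uint64_t offset)
    {
            /* cast the RHS composite expression so both sides
             * share the same essential type category (MISRA R10.7)
             */
            return (pentry_t)(phys + offset);
    }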
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
The 16 bit bootstrap code for SMP CPUs was using the 286-era "lmsw"
instruction (load machine status word) to set the protected bit in CR0
(which is the modern evolution of the same register), presumably
because this is 16 bit code and we can't move a dword into CR0.
But that's wrong, because the full instruction set *is* available in
real mode on a 386, you just have to use an operand size prefix to get
to it, which the assembler emits for you automatically when you use
the .code16 directive.
Write this conventionally and use modern (e.g. 1986-era) instructions.
It also has the advantage of not confusing much more modern
hypervisors like ACRN by issuing instructions they (and I!) never knew
existed.
Fixes#35076
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Because of a historical misunderstanding, by default the ACRN
hypervisor wants to load Zephyr at address 0x1000 and enter the binary
at that same address. This entry point corresponds to the __start
symbol of the build they were given, which is a 1-cpu non-SMP
configuration. Unfortunately, when we build with
CONFIG_MP_NUM_CPUS=1, the code in locore.S #if's out the 16 bit entry
point for the auxiliary CPUs at the start of the section. So in the
build ACRN received, the start address happened to be 0x7000, the same
address we need to launch the AP processors from.
That's right: under ACRN, the SAME ADDRESS used to enter the OS in 32
bit mode needs to be used later to boot CPUs running in 16 bit real
mode!
The solution, such as it is, is to put a 32 bit jump at the entry
address which hops to the 32 bit OS entry code, and then scribble NOP
instructions over that jump once we get there so that the next time we
reach that address (in real mode) we fall through to the correct
entry.
This patch should be considered a temporary workaround. While it
works on all x86 hardware, it's not really needed. A much better
solution would be to eliminate the locore linker region entirely
(which causes other headaches) and enter the Zephyr binary in a 32 bit
address somewhere in the contiguous high memory area. All that locore
is needed for is the 16 bit bootstrap code for SMP processors, which
is ~6 instructions and can be copied in from the kernel at runtime.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
From the point of checking the info pointer value all code in the
z_multiboot_init() function depends on it being non-NULL. Therefore,
simply return from the function if it's NULL.
Fixes#33084
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
When loaded via EFI, we obviously don't have a multiboot info pointer
available (we might have an EFI system table, but zefi doesn't pass
that through yet). Don't try to parse the "whatever garbage was in
%rbp" as a multiboot table.
The configuration is a little clumsy, as strictly our EFI kconfig just
says we're "building for" EFI but not that we'll boot that way. And
tests like arch/x86/info are trying to set CONFIG_MULTIBOOT=n
unconditionally, when it really should be something they detect from
devicetree or wherever.
Fixes#33545
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This marks code and data within x86/ia32 so they are going to
reside in boot and pinned regions. This is a step to enable
demand paging for the whole kernel.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for the kernel and data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is exactly one function being defined with the TEXT_START
macro, so that the x86-32 __start can appear at the beginning of
the text section. Since nothing else uses the macro, remove
TEXT_START to simplify things.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.
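A minimal usage sketch, assuming the signature
int arch_page_phys_get(void *virt, uintptr_t *phys):

    #include <kernel.h>

    static void show_mapping(void *virt)
    {
            uintptr_t phys;

            if (arch_page_phys_get(virt, &phys) == 0) {
                    printk("%p -> 0x%lx\n", virt,
                           (unsigned long)phys);
            } else {
                    printk("%p is not mapped\n", virt);
            }
    }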
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and will reflect
the actual number of page frames being added to the free page
frame list. Manipulating z_free_page_count before this is
going to mess up the accounting, so remove the code to
decrement z_free_page_count in arch_reserved_pages_update()
under x86.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There was a restriction that KERNEL_VM_OFFSET must equal
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in the
real world due to various hardware designs, especially those
where SRAM is not aligned to a PDP or PD boundary. So rework
those bits.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Remove the config BOOT_TIME_MEASUREMENT and the corresponding
#ifdef'd code throughout (kernel/init.c, idle.c, core/common.S,
reset.S, ...) which holds the extern hooks for z_timestamp_main
and z_timestamp_idle from the removed boot_time test suite.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
This adds the bits to the gen_mmu.py script so that extra mappings
can be added with caching disabled. This is useful for mapping
MMIO regions where caching is not desired.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is a possibility that the TSC frequency calculation
divides by zero. So this fixes the issue by repeatedly sampling
the delta clock cycles and delta TSC cycles until both are
non-zero.
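A minimal sketch of the retry loop; the helper names and the
final calculation are assumptions:

    #include <stdint.h>

    extern uint64_t read_clock(void);  /* hypothetical timer read */
    extern uint64_t read_tsc(void);    /* hypothetical TSC read */

    uint64_t calibrate_tsc(uint64_t clock_freq)
    {
            uint64_t dclk, dtsc;

            /* retry until both deltas are non-zero so the
             * division below cannot divide by zero
             */
            do {
                    uint64_t clk0 = read_clock();
                    uint64_t tsc0 = read_tsc();

                    /* ... wait a short interval ... */

                    dclk = read_clock() - clk0;
                    dtsc = read_tsc() - tsc0;
            } while ((dclk == 0U) || (dtsc == 0U));

            return (dtsc * clock_freq) / dclk;
    }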
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Rephrasing away from ain't, which is informal, uncommon, and can
be viewed as substandard or 'slang'.
Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
Reboot functionality has nothing to do with PM, so move it out to the
subsys/os folder.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
The variable tsc_freq is not accessible from user threads, which
prevents user threads from converting cycles to ns. So make
tsc_freq available globally in the default memory domain so the
conversion is possible.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>