The ARConnect Inter-core Debug Unit (ICD) provides additional
debug assist features in multi-core scenarios; it is useful for
halting the other cores when one core is halted.
Previously we programmed the ICD in the master core (core 0) at
its initial stage, adding all cores to the select mask. That only
works if the slave cores have already launched and are running
before the master core enables the ICD; if the master core is
launched first and then conditionally launches the slave cores,
it breaks.
Update the ARConnect debug (ARConnect ICD) select mask from each
slave core itself when it comes online, instead of using a
hardcoded select mask.
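A minimal sketch of the idea, run by each slave core early in its
own startup path (the helper and accessor names here are assumptions
for illustration, not the actual ARConnect API):

    /* Add this core to the ICD select mask, keeping the others. */
    static void arc_connect_debug_add_self(void)
    {
        uint32_t self = BIT(z_arc_v2_core_id());          /* this core's bit */
        uint32_t mask = arc_connect_debug_select_read();  /* assumed accessor */

        arc_connect_debug_select_set(mask | self);        /* assumed accessor */
    }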
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
The code depends on the order of evaluation of the
'z_x86_check_stack_bounds' function arguments.
The fix is to create two local variables, assign them the values
of _df_esf.esp and _df_esf.cs, and then call the function with
those two local variables as arguments.
Found as a coding guideline violation (MISRA R13.2) by a static
code scanning tool.
Change "int reason" to "unsigned reason" as in other functions.
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
According to the Zephyr Coding Guidelines, all switch statements
shall be well-formed.
Add a comment to the empty default case.
Add a LOG_ERR to the default case.
Found as a coding guideline violation (MISRA R16.1) by a static
code scanning tool.
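For illustration, the result looks roughly like this (the variable
name is a stand-in):

    switch (value) {
    case 0:
        /* ...existing cases unchanged... */
        break;
    default:
        /* All valid values are handled above */
        LOG_ERR("Unhandled value %u", value);
        break;
    }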
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
commit 5e9c583c24 ("arch/x86_64: Terrible, awful hackery to
bootstrap entry") introduced a terrible trick which begins execution
at the bottom of .locore with a jump, which then gets replaced with
NOP instructions for the benefit of 16 bit real mode startup of the
other CPUs later on.
But I forgot that EFI enters in 64 bit code natively, and so never
hits that path. And moving it to the 64 bit setup code doesn't work,
because at that point when we are NOT loaded from EFI, we already have
the Zephyr page tables in place that disallow writes to .locore.
So do it in the EFI loader which, while sort of a weird place, has
the benefit of being in C instead of assembly.
Really all this code needs to go away. A proper x86 entry
architecture would enter somewhere in the main blob, and .locore
should be a tiny stub we copy in at runtime.
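A sketch of the zefi-side fix (the symbol and jump length are
assumptions for illustration):

    /* The image starts with a 32-bit jump at the bottom of .locore.
     * EFI entry never executes it, and Zephyr's page tables aren't
     * in place yet, so .locore is still writable here.  Scribble
     * NOPs over the jump so the later 16-bit SMP bootstrap entry at
     * the same address falls through correctly.
     */
    uint8_t *entry = (uint8_t *)zephyr_entry;   /* assumed symbol */

    for (int i = 0; i < JUMP_INSN_LEN; i++) {   /* assumed length */
        entry[i] = 0x90;                        /* x86 NOP opcode */
    }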
Fixes #36107
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
The stack frame size used for context switch is rounded up to
16-byte alignment. Therefore, we need to round down the pointer to
the top of the pre-populated stack frame so that the preserved
stack frame size is also rounded up to 16-byte alignment.
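A minimal sketch of the adjustment, using Zephyr's ROUND_DOWN
helper (the pointer name is illustrative):

    /* Align the top-of-frame pointer down so the preserved
     * context-switch frame stays 16-byte aligned.
     */
    top_of_frame = (char *)ROUND_DOWN((uintptr_t)top_of_frame, 16);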
Fixes #29535
Signed-off-by: Shih-Wei Teng <swteng@andestech.com>
Since physical memory is no longer wholly identity mapped,
there is no need to set the VM size to be larger than the
physical memory size. The VM size was 2GB (the max physical
memory size of x86 boards) + 1GB (for memory mappings).
So simply shrink the size to 1GB, as the kernel size is
small and we still have a large chunk of space to do
memory mapping.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With ACPI doing dynamic memory mapping and unmapping
to access ACPI tables, there is no need to identity
map all the physical memory anymore. So remove
the "select" statement in ACPI kconfig.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Instead of accessing ACPI tables through physical address, do
memory mapping/unmapping so they can be accessed via virtual
addresses. This allows us to avoid identity mapping all
physical memory, and thus no need for a page table large enough
to map everything.
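Roughly, each table access is now bracketed by a map/unmap pair; a
hedged sketch assuming the z_phys_map()/z_phys_unmap() pair (exact
flags elided):

    uint8_t *vaddr;

    /* Map the table's physical address into virtual memory... */
    z_phys_map(&vaddr, table_phys, table_size, 0);

    /* ...use the table through vaddr... */

    /* ...and unmap it when done. */
    z_phys_unmap(vaddr, table_size);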
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This limits the search for the Extended BIOS Data Area (EBDA) to
0x80000 to 0x100000, as this is usually the area for it.
If 0000:040e has an address not pointing into this area, it is
probably an invalid address, and should not be de-referenced
to avoid a segfault.
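A hedged sketch of the check (variable names illustrative; this runs
while low memory is still accessible):

    /* 0000:040e holds the EBDA segment; shift left 4 bits to
     * turn it into a physical address.
     */
    uintptr_t ebda = (uintptr_t)(*(uint16_t *)0x040eUL) << 4;

    /* Only dereference it if it is in the usual EBDA window. */
    if ((ebda < 0x80000UL) || (ebda >= 0x100000UL)) {
        return NULL;   /* probably bogus, don't touch it */
    }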
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Most arch CMakeLists.txt files contain rules to add compiler and
linker flags for coverage if CONFIG_COVERAGE is enabled, but 4 of
them were missing this.
Instead, set the coverage flags in arch/common/CMakeLists.txt, which
affects all archs.
Signed-off-by: Jeremy Bettis <jbettis@chromium.org>
The essential type of the RHS operand (64 bit) is wider than the
essential type of the composite expression in the LHS operand
(32 bit): LHS entry_val is 32 bit, while the RHS (phys + offset)
is 64 bit.
Cast the RHS composite expression to the (pentry_t) type.
Found as a coding guideline violation (MISRA R10.7) by a static
code scanning tool.
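The change amounts to (surrounding context abbreviated):

    /* Before: 64-bit composite expression assigned to a 32-bit entry */
    entry_val = phys + offset;

    /* After: cast the composite expression to the entry type */
    entry_val = (pentry_t)(phys + offset);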
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
During MPU init, we check the MSA_frac bits[55:52] and MSA bits[51:48]
of the ID_AA64MMFR0_EL1 register. Currently we only allow 1F to pass
the check, but according to the Armv8-R AArch64 manual [1], both 1F
and 2F indicate that the processor supports the MPU. This commit fixes
that.
[1]: https://developer.arm.com/documentation/ddi0600/latest/
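A hedged sketch of the relaxed check (the register read helper is an
assumption):

    /* MSA_frac (bits [55:52]) and MSA (bits [51:48]) combined */
    uint64_t msa = (read_id_aa64mmfr0_el1() >> 48) & 0xff;

    /* 0x1f and 0x2f both indicate MPU support */
    if ((msa != 0x1f) && (msa != 0x2f)) {
        return;   /* no MPU support */
    }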
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
When SMP is enabled, the primary core calls arch_start_cpu to start
the secondary CPUs. There is an assertion checking the core mpid to
make sure it is called by the primary core.
But the check is bogus. After the first secondary core is brought
up, arm64_cpu_boot_params.mpid will be changed, which will fail the
assertion.
The current solution restores arm64_cpu_boot_params.mpid.
However, using arch_curr_cpu()->id == 0 as the assertion is
better.
_current_cpu->id would always fail the assertion inside this macro
(__ASSERT_NO_MSG(!z_smp_cpu_mobile()), so arch_curr_cpu()->id is
used instead.
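The resulting assertion is roughly (message text illustrative):

    __ASSERT(arch_curr_cpu()->id == 0,
             "arch_start_cpu() must be called from the primary core");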
Signed-off-by: Jaxson Han <jaxson.han@arm.com>
In unshared FP mode, only one thread can use the FPU, but the kernel
doesn't know which one, so the riscv arch enables the FPU for every
thread.
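A hedged sketch of the per-thread setup (the macro and field names
are assumptions; the FS field lives in bits [14:13] of mstatus):

    /* Mark the FPU "Initial" in the thread's saved mstatus so
     * the FPU is enabled whenever this thread runs.
     */
    stack_init->mstatus |= MSTATUS_FS_INIT;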
Signed-off-by: Jim Shu <cwshu@andestech.com>
The new API can be used any time all FP registers must be manually
saved and restored for an operation. This also eases readability.
Signed-off-by: Øyvind Rønningstad <oyvind.ronningstad@nordicsemi.no>
Most of the code for the three exception functions is identical, so
use macros to make things easier to read.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
Use the context switch macro for z_arm_cortex_r_svc to be clearer
about the SVC call being executed.
Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
The 16 bit bootstrap code for SMP CPUs was using the 286-era "lmsw"
instruction (load machine status word) to set the protected bit in CR0
(which is the modern evolution of the same register), presumably
because this is 16 bit code and we can't move a dword into CR0.
But that's wrong, because the full instruction set *is* available in
real mode on a 386, you just have to use an operand-size prefix to
get to it, which the assembler emits for you automatically when you
use the .code16 directive.
Write this conventionally and use modern (e.g. 1986-era) instructions.
It also has the advantage of not confusing much more modern
hypervisors like ACRN by issuing instructions they (and I!) never knew
existed.
Fixes #35076
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Because of a historical misunderstanding, by default the ACRN
hypervisor wants to load Zephyr at address 0x1000 and enter the binary
at that same address. This entry point corresponds to the __start
symbol of the build they were given, which is a 1-cpu non-SMP
configuration. Unfortunately, when we build with
CONFIG_MP_NUM_CPUS=1, the code in locore.S #if's out the 16 bit entry
point for the auxiliary CPUs at the start of the section. So in the
build ACRN received, the start address happened to be 0x7000, the same
address we need to launch the AP processors from.
That's right: under ACRN, the SAME ADDRESS used to enter the OS in 32
bit mode needs to be used later to boot CPUs running in 16 bit real
mode!
The solution, such as it is, is to put a 32 bit jump at the entry
address which hops to the 32 bit OS entry code, and then scribble NOP
instructions over that jump once we get there so that the next time we
reach that address (in real mode) we fall through to the correct
entry.
This patch should be considered a temporary workaround. While it
works on all x86 hardware, it's not really needed. A much better
solution would be to eliminate the locore linker region entirely
(which causes other headaches) and enter the Zephyr binary at a 32 bit
address somewhere in the contiguous high memory area. All that locore
is needed for is the 16 bit bootstrap code for SMP processors, which
is ~6 instructions and can be copied in from the kernel at runtime.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
These functions are those that need to be implemented by the backing
store outside the kernel. Promote them from z_* so they can be
included in the documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
These functions and data structures are those that need to be
implemented by the eviction algorithm and the application outside
the kernel. Promote them from z_* so they can be included in the
documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This commit implements the SPARC V8 ABI "Flush windows" software trap.
It enables support for C++ exceptions and longjmp().
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
Shrink the name of the hidden cortex-m option for the
null-pointer dereference detection feature.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
Reduce the length of the Kconfig defines related to
null-pointer dereference detection in Cortex-M.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
If single-thread builds are not supported by the
architecture, the MULTITHREADING option should be
prompt-less to block any modifications to it. We
also introduce an explicit ARCH-level Kconfig option
that reflects whether the ARCH is capable of
single-thread Zephyr builds.
Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
SECTION_FUNC allows only one function to reside in a sub-section,
while SECTION_SUBSEC_FUNC allows multiple functions to reside in a
sub-section. We should therefore use SECTION_SUBSEC_FUNC for _reset
and _start.
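In the assembly source this looks roughly like the following (the
subsection name is an assumption for illustration):

    SECTION_SUBSEC_FUNC(TEXT, _reset_and__start, _reset)
        /* ..._reset body... */
    SECTION_SUBSEC_FUNC(TEXT, _reset_and__start, _start)
        /* ..._start body... */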
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
"arm64_cpu_boot_params.mpid" should be assigned to "master_core_mpid"
after secondary CPU core up.
Because "arm64_cpu_boot_params.mpid" is used to check the next up CPU
core's mpid is the excepted mpid. After excepted CPU core up, the
"arm64_cpu_boot_params.mpid" doesn't restore to primary CPU core's mpid
and then the primary CPU core try to up third CPU core will crash in
assertion.
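The fix is essentially (sketch):

    /* After the secondary core reports in, restore the expected
     * mpid so the next bring-up's assertion passes.
     */
    arm64_cpu_boot_params.mpid = master_core_mpid;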
Signed-off-by: Huifeng Zhang <Huifeng.Zhang@arm.com>
The ATOMIC_OPERATIONS_BUILTIN issue (internal Jira number:
P10019563-43273) has been fixed in the new MWDT 2021.03 release, so
we can use builtin atomics again.
This reverts PR #28528.
Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
From the point where the info pointer value is checked, all code in
the z_multiboot_init() function depends on it being non-NULL.
Therefore, simply return from the function if it's NULL.
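I.e. an early-out at the top of the function (signature assumed for
illustration):

    void z_multiboot_init(struct multiboot_info *info)
    {
        if (info == NULL) {
            return;
        }

        /* ...the rest of the init code, which may now assume
         * a valid pointer...
         */
    }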
Fixes #33084
Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
Every va_start() currently triggers an FPU access trap if the FPU is
not already in use. This is due to the fact that va_start() must copy FPU
registers that are used for float argument passing into the va_list
object. Flushing the FPU context to its owner and granting access to
the current thread is wasteful if this is only for va_start(),
especially since in most cases there are simply no FP arguments
being passed by the caller.
This is made even worse with exception code (syscalls, IRQ handlers,
etc.) where the exception code has to be resumed with interrupts
disabled upon FPU access as there is no provision for preserving an
interrupted exception mode's FPU context.
Fix those issues by simply simulating the sequence of STR instructions
that the va_start() generates without actually granting FPU access.
We limit ourselves only to exception context to keep changes to a
minimum for now.
This also allows for reverting the ARM64 exception in the nested IRQ
test as it now works properly even if FPU_SHARING is enabled.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
In case CONFIG_NOCACHE_MEMORY=y, the D-Cache needs to be cleaned and
invalidated before enabling the MPU, to make sure no data from a
__nocache__ region is present in the D-Cache.
If the D-Cache is disabled, SCB_CleanInvalidateDCache() shall not be
used, as the cache might contain random data for random addresses,
and this might just create a bus fault.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
On reset we do not know what the status of the D-Cache is, nor its
content.
If it is disabled, do not try to clean it, as it might contain random
data for random addresses, and this might just create a bus fault.
Invalidating it is enough.
If it is enabled, its content is not random.
SCB_DisableDCache() will clean it, invalidate it and disable it.
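Putting both cases together, the reset-time handling is roughly as
follows (CMSIS cache helpers; the enable check via SCB->CCR is
illustrative):

    if (SCB->CCR & SCB_CCR_DC_Msk) {
        /* Enabled: content is valid; clean, invalidate, disable. */
        SCB_DisableDCache();
    } else {
        /* Disabled: content may be random; just invalidate. */
        SCB_InvalidateDCache();
    }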
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
When loaded via EFI, we obviously don't have a multiboot info pointer
available (we might have an EFI system table, but zefi doesn't pass
that through yet). Don't try to parse the "whatever garbage was in
%rbp" as a multiboot table.
The configuration is a little clumsy, as strictly our EFI kconfig just
says we're "building for" EFI but not that we'll boot that way. And
tests like arch/x86/info are trying to set CONFIG_MULTIBOOT=n
unconditionally, when it really should be something they detect from
devicetree or wherever.
Fixes #33545
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This marks code and data within x86/ia32 so they will reside
in the boot and pinned regions. This is a step toward enabling
demand paging for the whole kernel.
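For illustration, the marking uses section attribute macros along
these lines (a sketch; the function and variable are hypothetical):

    /* Pinned: must stay resident while demand paging is active */
    __pinned_func
    static void early_irq_stub(void)
    {
        /* ... */
    }

    /* Boot: only needed during early boot, may be reclaimed later */
    __boot_data static uint32_t early_setup_flag;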
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This adds both boot and pinned sections to the linker
script for ia32. This is required for enabling demand
paging for kernel code and data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is exactly one function being defined with the TEXT_START
macro, so that the x86-32 __start can appear at the beginning of
the text section. Since no one else is using it, it is better to
remove TEXT_START to simplify things.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The AT instruction gives the corresponding physical address directly.
Much faster than the default implementation.
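A hedged sketch of the fast path for arm64 (PAR_EL1 layout per the
Arm ARM; error handling abbreviated):

    uint64_t par;

    /* Ask the MMU to translate the address as a stage-1 EL1 read,
     * then fetch the result from PAR_EL1.
     */
    __asm__ volatile ("at S1E1R, %0" :: "r" (virt));
    __asm__ volatile ("isb; mrs %0, PAR_EL1" : "=r" (par));

    if (par & 1) {
        return -EFAULT;   /* translation failed */
    }

    /* Physical address is in bits [47:12]; keep the page offset. */
    *phys = (par & 0x0000fffffffff000UL) | ((uint64_t)virt & 0xfffUL);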
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This
is not always the case, because many SoCs rely on external cache
controllers as a peripheral external to the core (for example the
PL310 cache controller and the L2Cxxx family). In some cases you also
want a single driver to control a whole set of cache controllers.
Rework the cache code, introducing support for external cache
controllers.
Signed-off-by: Carlo Caione <ccaione@baylibre.com>
For ARCv3 the register is fixed to r30, so we don't need to
configure it at compile-time.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Increase the stacks required for the ARCv3 64-bit CI to pass.
The CMSIS stacks are for the programs in samples/portability.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
ARCv3 64 bit processors don't have the Zero Delay Loop
(also named Zero Overhead Loop, ZOL) mechanism. Add a Kconfig
option to remove the ZOL register save/restore so the code
can be built for both ARCv2 and ARCv3.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
ARCv2 32 bit and ARCv3 64 bit share the same vector table
structure but with different vector entry sizes (32 and 64 bit),
so we can easily make the vector table bit agnostic.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Make the variables where we store CPU register values and
memory addresses bit agnostic.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Mark the places where we intentionally use st instead of STR in
code common to ARCv2 and ARCv3.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
When accessing a bloated structure member we can exceed the u9
operand in the store instruction, so use the _st32_huge_offset
macro instead for 32 bit accesses.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Rewrite the ARC assembler code with asm-compat macros, so the same
code can be used for both ARCv2 (GNU and MWDT assemblers) and
ARCv3 (GNU assembler).
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
Reuse the ARCv2 headers for ARCv3 where possible.
In this commit we simply allow them to be used for ARCv3; we'll
move them to the proper folder and rename them where required
in an upcoming cleanup patch.
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>