Commit graph

4702 commits

Gerard Marull-Paretas
bad523d1aa arch: x86: zefi: support multiple include paths
When legacy mode is enabled, Zephyr includes both include/ and
include/zephyr. Allow the zefi.py script to accept multiple include
paths to cover this scenario.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-05-05 14:26:05 -05:00
Bradley Bolen
88ba97fea4 arch: arm: aarch32: cortex_a_r: Add shared FPU support
This adds lazy floating point context switching.  On svc/irq entrance,
the VFP is disabled and a pointer to the exception stack frame is saved
away.  If the esf pointer is still valid on exception exit, then no
other context used the VFP so the context is still valid and nothing
needs to be restored.  If the esf pointer is NULL on exception exit,
then some other context used the VFP and the floating point context is
restored from the esf.

The undefined instruction handler is responsible for saving away the
floating point context if needed.  If the handler is in the first
irq/svc context and the current thread uses the VFP, then the float
context needs to be saved.  Also, if the handler is in a nested context
and the previous context was using the VFP, save the float context.
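
A minimal C sketch of the lazy-save scheme described above; every name
below is a hypothetical illustration (the real aarch32 code is written
in assembly):

  struct esf;                              /* exception stack frame */
  void vfp_disable(void);                  /* hypothetical helpers */
  void vfp_restore_from(struct esf *esf);

  /* Set on svc/irq entry; the undefined-instruction handler clears it
   * after spilling the interrupted thread's VFP registers into the esf.
   */
  static struct esf *vfp_owner_esf;

  void exc_entry(struct esf *esf)
  {
      vfp_disable();                       /* any VFP use now traps */
      vfp_owner_esf = esf;
  }

  void exc_exit(struct esf *esf)
  {
      if (vfp_owner_esf == NULL) {
          /* Some other context used the VFP while we were away; the
           * interrupted thread's FP state was saved into the esf, so
           * restore it before returning.
           */
          vfp_restore_from(esf);
      }
      /* else: nothing touched the VFP, its registers are still valid */
  }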

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Stephanos Ioannidis
80bd814131 arch: arm: cortex_r: Initialise VFP D32 registers for DCLS
This commit updates the Cortex-R reset routine to initialise
(synchronise) the VFP D16-D31 registers when Dual-redundant Core
Lock-step (DCLS) is enabled.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-05-05 12:03:27 +09:00
Bradley Bolen
7f44e28619 arch: arm: aarch32: Create z_arm_floating_point_init() for Cortex-R
This will enable the VFP unit on boot to handle the case where
FPU_SHARING is not enabled.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Bradley Bolen
7c1e399179 arch: arm: aarch32: Create a fpu stack frame
Grouping the FPU registers together will make adding FPU support for
Cortex-A/R easier later.  It also makes it easier to take the sizeof
and offsetof of the FPU registers.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Bradley Bolen
3f7162fc07 arch: arm: aarch32: Rearrange exception stack frame
Cortex-A/R use a descending stack frame and the hardware does not help
with the stacking.  This led to some less than desirable workarounds in
the exception code where the basic stack frame was saved twice.
Rearranging the order of the exception stack frame removes that problem
and provides a clearer path to saving CPU context in a fully descending
manner.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-05-05 12:03:27 +09:00
Stephanos Ioannidis
5181c61797 arch: arm: Add unified floating-point configuration symbols
This commit adds the unified floating-point configuration symbols for
the ARM architectures.

These configuration symbols allow specification of the floating-point
coprocessors, such as VFP (also known as FP for Cortex-M) and NEON,
for the ARM architectures.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-05-05 12:03:27 +09:00
Flavio Ceolin
f5a0d4cd26 arch: xtensa: Optimize cache management for pinned threads
When building with CONFIG_SCHED_CPU_MASK_PIN_ONLY we can assume that a
thread will always be executed on the same CPU and consequently skip the
cache invalidation.
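
A hedged sketch of that compile-time shortcut; the invalidate helper is
a hypothetical stand-in, while struct k_thread and the Kconfig symbol
are real:

  #include <zephyr/kernel.h>     /* struct k_thread, ARG_UNUSED */

  void dcache_invalidate(void *start, size_t size);   /* illustrative */

  static void switch_in_fixup(struct k_thread *incoming)
  {
  #ifndef CONFIG_SCHED_CPU_MASK_PIN_ONLY
      /* The thread may have last run on another CPU: drop any stale
       * lines covering its stack from the local dcache.
       */
      dcache_invalidate((void *)incoming->stack_info.start,
                        incoming->stack_info.size);
  #else
      /* Pinned threads never migrate, so their stack cannot be stale
       * in this CPU's dcache: skip the invalidation entirely.
       */
      ARG_UNUSED(incoming);
  #endif
  }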

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2022-05-04 13:46:48 -04:00
Andy Ross
e931b7ba47 arch/x86: Use EFI console as default printk handler
Where we have access to a bootstrap UEFI environment, it's productive
to use that console as the default printk handler.  That avoids the
bringup hassle of trying to configure UART settings blindly, as has
been customary.  It also emits nice text to the framebuffer on devices
with no serial port or other debug harness at all.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-05-04 11:34:55 +03:00
Nicolas Pitre
f51d89df30 riscv: pmp: work around a QEMU bug
The NAPOT mode isn't computed properly in qemu when the full address
range is covered. Let's hardcode the value that the qemu code checks
explicitly until the appropriate fix is applied to qemu itself.

For reference, here's the qemu patch:
https://lists.gnu.org/archive/html/qemu-devel/2022-04/msg00961.html
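
For illustration only (not the in-tree pmp.c): NAPOT packs base and
size into a single pmpaddr value, and the full-address-range case is
hardcoded to the all-ones encoding that QEMU checks for explicitly:

  #include <stdint.h>
  #include <stddef.h>
  #include <stdbool.h>

  static uintptr_t napot_addr(uintptr_t base, size_t size, bool full_range)
  {
      if (full_range) {
          /* The whole address space cannot be expressed by the formula
           * below, so hardcode the all-ones value QEMU special-cases.
           */
          return ~(uintptr_t)0;
      }
      /* Regular NAPOT math: size is a power of two >= 8 and base is
       * aligned to it, giving pmpaddr = (base >> 2) | (size/8 - 1).
       */
      return (base >> 2) | ((size >> 3) - 1);
  }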

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre
ec9c2ec2d8 riscv: pmp: rename CONFIG_PMP_SLOT
The plural form is clearer.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre
554f24661f riscv: pmp: remove previous implementation
Overall diffstat with the new PMP code in place:

 18 files changed, 866 insertions(+), 1372 deletions(-)

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre
2fece49a14 riscv: pmp: switch over to the new implementation
Add the appropriate hooks effectively replacing the old implementation
with the new one.

Also, the stackguard wasn't properly enforced, especially in combination
with usermode. This is now fixed.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre
7a55bda7e1 riscv: pmp: add new usermode support
The idea here is to compute the PMP register set on demand i.e. upon
scheduling in the affected threads, and only if changes occurred.
A simple sequence number is used to stay in sync with the latest update.
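
A hedged sketch of the sequence-number scheme; the types and helpers
are hypothetical stand-ins, not the in-tree RISC-V code:

  struct thread_pmp {
      unsigned int seq;          /* last domain update this thread saw */
      /* precomputed PMP entries would live here as well */
  };

  void recompute_pmp_set(struct thread_pmp *t);        /* hypothetical */
  void write_pmp_csrs(const struct thread_pmp *t);

  static unsigned int domain_seq;   /* bumped on each memory-domain change */

  void on_domain_update(void)
  {
      domain_seq++;              /* marks every cached PMP set as stale */
  }

  void on_swap_in(struct thread_pmp *t)
  {
      /* Recompute only if something changed since this thread was last
       * scheduled in; otherwise reuse the precomputed entries as-is.
       */
      if (t->seq != domain_seq) {
          recompute_pmp_set(t);
          t->seq = domain_seq;
      }
      write_pmp_csrs(t);
  }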

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre
68b8f0e5ce riscv: pmp: new stackguard implementation
Stackguard uses the PMP to prevent many types of stack overflow by
making any access to the bottom stack area raise a CPU exception. Each
thread has its set of precomputed PMP entries and those are written to
PMP registers at context switch time.

This is the code to set it up. It will be connected later.
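
A hedged sketch of what one precomputed guard entry could look like;
the pmpcfg bit positions follow the RISC-V privileged spec, everything
else is illustrative:

  #include <stdint.h>
  #include <stddef.h>

  #define PMP_CFG_A_NAPOT (3u << 3)    /* address-matching mode = NAPOT */
  #define PMP_CFG_L       (1u << 7)    /* locked: applies to M-mode too */

  struct guard_entry {
      uintptr_t addr;                  /* NAPOT-encoded guard region */
      uint8_t   cfg;                   /* R=W=X all zero: any access faults */
  };

  /* guard_size is a power of two and stack_bottom is aligned to it */
  void stackguard_precompute(struct guard_entry *e,
                             uintptr_t stack_bottom, size_t guard_size)
  {
      e->addr = (stack_bottom >> 2) | ((guard_size >> 3) - 1);
      e->cfg  = PMP_CFG_A_NAPOT | PMP_CFG_L;
  }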

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Nicolas Pitre
2e66da3bc3 riscv: pmp: new implementation
This is the core code to manage PMP entries with only the global entries
initialisation for now. It is not yet linked into the build.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-29 15:30:00 +02:00
Evgeniy Paltsev
9ce0d31c33 ARC: SMP: debug: workaround MDB changing debug_select value
The MDB debugger may modify the debug_select and debug_mask registers
on start, so we can't rely on the debug_select reset value.

Let's set the correct value on the primary CPU without reading the
initial value from debug_select.

Internal ID: P10019563-50516

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2022-04-29 12:34:21 +02:00
Keith Packard
f623571a73 riscv: Initialize TP register when starting threads
Set TP in the exception context so that it gets loaded into the CPU when
the thread first runs. For secondary cores, set TP to the corresponding
idle thread's TLS area.

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-04-28 11:09:01 +09:00
Keith Packard
1638d4851e arch/arm: Use TPIDRURO on cortex-a too
V7-A also supports TPIDRURO, so go ahead and use that for TLS, enabling
thread local storage for the other ARM architectures.

Add an __aeabi_read_tp function in case code was compiled to use it.
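
A hedged C rendition of what an __aeabi_read_tp built on TPIDRURO looks
like; the in-tree helper is assembly, since the AEABI routine must
preserve registers a plain C function may clobber:

  void *__aeabi_read_tp(void)
  {
      void *tp;

      /* TPIDRURO: user read-only thread ID register, CP15 c13/c0, op2=3 */
      __asm__ volatile("mrc p15, 0, %0, c13, c0, 3" : "=r" (tp));
      return tp;
  }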

Signed-off-by: Keith Packard <keithp@keithp.com>
2022-04-28 11:09:01 +09:00
Andy Ross
64a3159dee arch/xtensa: Optimize cache management on context switch
Making context switch cache-coherent in SMP is hard.  The
KERNEL_COHERENCE handling was conservatively invalidating the stack
region of a thread that was being switched in.  This was because it
might have (1) run on this CPU in the past, but (2) run most recently
on a different CPU.  In that case we might have stale data still in
our local dcache!

But this has performance impact in the (very common!) case of a thread
being switched out briefly and then back in (e.g. k_sleep() for a
small duration).  It will come back having lost all of its cached
stack context, and will have to fetch all that information back from
shared SRAM!

Treat this by tracking a "last_cpu" for each thread in the arch part
of the thread struct.  If we're coming back to the same CPU we left,
we know we can skip the invalidate.
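
A hedged sketch of the last_cpu bookkeeping; the names are illustrative
rather than the in-tree Xtensa code:

  #include <stddef.h>

  void dcache_invalidate(void *start, size_t size);    /* illustrative */

  struct arch_cache_state {
      int last_cpu;          /* CPU this thread last ran on, -1 if never */
  };

  void switch_in_cache_fixup(struct arch_cache_state *a,
                             void *stack, size_t size, int cur_cpu)
  {
      if (a->last_cpu != cur_cpu) {
          /* The thread ran most recently somewhere else, so our local
           * dcache may still hold stale lines for its stack.
           */
          dcache_invalidate(stack, size);
      }
      a->last_cpu = cur_cpu;    /* coming straight back here is now cheap */
  }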

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2022-04-27 18:54:10 -04:00
Nicolas Pitre
f61b8b8c16 semihosting: fix inline assembly output dependency
Commit d8f186aa4a ("arch: common: semihost: add semihosting
operations") encapsulated semihosting invocation in a per-arch
semihost_exec() function. There is a fixed register variable declaration
for the return value, but this variable is not listed as an output
operand of the respective inline assembly segments, which is an error.
This is not reported as such by gcc, and the generated code is still OK
in those particular instances, but that is not guaranteed; clang does
complain about such cases.
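
A generic illustration of the fix (not the actual semihost_exec()
source), using the Cortex-M style "bkpt 0xab" trap: the fixed register
carrying the result must appear in the output list ("+r" here), or the
compiler is free to assume it still holds the input value:

  long semihost_call(long op, void *arg)
  {
      register long r0 __asm__("r0") = op;
      register void *r1 __asm__("r1") = arg;

      __asm__ volatile("bkpt 0xab"
                       : "+r" (r0)      /* result comes back in r0 */
                       : "r" (r1)
                       : "memory");
      return r0;
  }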

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-24 19:46:15 +02:00
Anas Nashif
399a0b4b31 debug: generate call graph profile data using gprof
This will generate profile data that can be analyzed using gprof. After
building and running the application (currently for native_posix only),
you will get a file "gmon.out" with the call graph, which can be
processed with gprof:

  gprof build/zephyr/zephyr.exe gmon.out > analysis.txt

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2022-04-22 16:04:08 -04:00
Jordan Yates
d8f186aa4a arch: common: semihost: add semihosting operations
Add an API that utilizes the ARM semihosting mechanism to interact with
the host system when a device is being emulated or run under a debugger.

RISCV is implemented in terms of the ARM implementation, and therefore
the ARM definitions cross enough architectures to be defined 'common'.

Functionality is exposed as a separate API instead of syscall
implementations (`_lseek`, `_open`, etc) due to various quirks with
the ARM mechanisms that mean the function arguments are not standard.

For more information see:
https://developer.arm.com/documentation/dui0471/m/what-is-semihosting-

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>

2022-04-21 13:04:52 +02:00
Jordan Yates
070422db46 arch: common: dedicated SEMIHOST symbol
Control the usage of semihosting with a dedicated symbol, instead of
implying semihosting from the usage of `SEMIHOST_CONSOLE`. This allows
semihosting to be used without the semihost console.

Signed-off-by: Jordan Yates <jordan.yates@data61.csiro.au>
2022-04-21 13:04:52 +02:00
Mahesh Mahadevan
b2d3fdceff cmake: Add support to add symbols to ramfunc section
This PR allows the user to add symbols to the ramfunc
section. The use for this could be as follows:

zephyr_linker_sources_ifdef(CONFIG_ARCH_HAS_RAMFUNC_SUPPORT
  RAMFUNC_SECTION
  quick_access_code.ld
)

quick_access_code.ld (as shown below) can define additional
symbols to go into the ramfunc section:

. = ALIGN(4);
KEEP(*(CodeQuickAccess))

Signed-off-by: Mahesh Mahadevan <mahesh.mahadevan@nxp.com>
2022-04-18 17:24:12 -07:00
Stephanos Ioannidis
f9a3f02b86 x86: Initialise FPU regs during thread creation for eager FPU sharing
When "eager FPU sharing" mode is enabled, FPU registers must be
initialised at the time of thread creation because the floating-point
context is always active and no further FPU initialisation is performed
later.

Note that, in case of the "lazy FPU sharing" mode, floating-point
context is inactive by default and the FPU is initialised when the
first floating-point instruction is executed.

Refer to the issue #44902 for more details.
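
A hedged sketch of the eager-sharing idea, not the in-tree x86 code:
capture a known-good FP/SSE register image once, then seed each new
thread's save area with it at creation time:

  #include <string.h>

  struct fxsave_area {
      unsigned char data[512];
  } __attribute__((aligned(16)));

  static struct fxsave_area fp_default_image;

  void fp_default_image_init(void)          /* run once during early boot */
  {
      __asm__ volatile("fninit");           /* reset x87 state */
      __asm__ volatile("fxsave %0" : "=m" (fp_default_image));
  }

  void thread_fp_init(struct fxsave_area *save_area)
  {
      /* Eager sharing: the FP context is live from the thread's very
       * first instruction, so it must start from a valid image rather
       * than whatever happened to be in the save area.
       */
      memcpy(save_area, &fp_default_image, sizeof(fp_default_image));
  }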

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2022-04-18 17:23:48 -07:00
Ryan McClelland
f7ddcd2713 arch: arm: aarch32: initialize FPSCR to reset value for ARMv8.1
With GCC 11 now supporting low overhead branching in ARMv8.1, ASM "LE"
(loop-end) instructions would trigger an INVSTATE hard-fault after
FPSCR was set to 0. This was due to the FPSCR getting a new field in
ARMv8.1. LTPSIZE is now set to its reset value of 'tail predication not
applied'.
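
A hedged sketch of the initialisation change, assuming LTPSIZE sits at
FPSCR bits [18:16] and that 0b100 means "tail predication not applied";
the Kconfig guard name is illustrative:

  #include <stdint.h>

  #define FPSCR_LTPSIZE_NONE (0x4u << 16)   /* tail predication not applied */

  static inline void fpscr_init(void)
  {
  #if defined(CONFIG_ARMV8_1_M_MAINLINE)    /* illustrative guard */
      uint32_t fpscr = FPSCR_LTPSIZE_NONE;  /* no longer plain 0 */
  #else
      uint32_t fpscr = 0;
  #endif
      __asm__ volatile("vmsr fpscr, %0" : : "r" (fpscr));
  }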

Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
2022-04-15 10:33:48 -07:00
Ryan McClelland
c5b59282d6 arch: arm: aarch32: add Kconfig for arm cortex-m that implements a cache
The cache is an optional configuration of both the ARM Cortex-M7 and
Cortex-M55. Previously, the code only checked for an M7 rather than
whether the CPU was actually built with the cache.

Signed-off-by: Ryan McClelland <ryanmcclelland@fb.com>
2022-04-14 16:12:03 -05:00
Immo Birnbaum
60ee14db96 arch: arm: aarch32: remove unnecessary "EOF" comments
Remove unnecessary EOF comment lines at the end of each file.

Signed-off-by: Immo Birnbaum <Immo.Birnbaum@weidmueller.com>
2022-04-14 14:43:52 -05:00
Ederson de Souza
c0b7864840 arch/xtensa: Enable backtrace on panic on Intel ADSP platforms
Platform-specific functions necessary to enable this feature
(z_xtensa_ptr_executable() and z_xtensa_stack_ptr_is_sane()) were
implemented for Intel ADSP platforms.

Current implementation just ensures stack pointer and program counter
are within relevant areas defined in the linker scripts, without going
too fine grained.
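
A hedged sketch of those coarse checks; the linker-provided bound
symbols below are hypothetical names:

  #include <stdbool.h>
  #include <stdint.h>

  extern char _text_start[], _text_end[];     /* executable area bounds */
  extern char _stack_start[], _stack_end[];   /* stack area bounds */

  bool ptr_executable(const void *pc)
  {
      return (const char *)pc >= _text_start &&
             (const char *)pc < _text_end;
  }

  bool stack_ptr_is_sane(uintptr_t sp)
  {
      /* Aligned and inside an area the linker script designates as stack */
      return (sp & 0xf) == 0 &&
             sp >= (uintptr_t)_stack_start &&
             sp < (uintptr_t)_stack_end;
  }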

The `.iram1` section, used by the backtrace code, was also added to the
Intel ADSP linker script.

Finally, update west manifest to use up-to-date SOF, which contains a
patch to fix build issues related to the linker changes.

Signed-off-by: Ederson de Souza <ederson.desouza@intel.com>
2022-04-14 11:03:40 -04:00
Mark Holden
eba9c872b1 coredump: Add callee registers to arm arch block
Add version 2 of the coredump arm_arch_block,
which includes the callee-saved registers.

Signed-off-by: Mark Holden <mholden@fb.com>
2022-04-13 13:26:37 -07:00
Mateusz Sierszulski
ded324c61d arch: arm: change dependency on CODE_DATA_RELOCATION
This commit changes the CODE_DATA_RELOCATION dependency by
adding CPU_AARCH32_CORTEX_R next to CPU_CORTEX_M.

Signed-off-by: Mateusz Sierszulski <msierszulski@antmicro.com>
2022-04-11 10:17:14 +02:00
Bradley Bolen
570c254eda arch: arm: aarch32: ARM_STORE_EXC_RETURN only applies to Cortex-M
Cortex-M code is the only flavor that supports switching between secure
and non-secure state so make sure this kconfig only applies to it.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-04-11 10:16:41 +02:00
Bradley Bolen
fd2aab3861 arch: arm: aarch32: Fix when mode offset is defined
Commit a2cfb8431d ("arch: arm: Add code for swapping threads between
secure and non-secure") changed the mode variable in the _thread_arch to
be defined by ARM_STORE_EXC_RETURN or USERSPACE.  The generated offset
define for mode was enabled by FPU_SHARING or USERSPACE.  This broke
Cortex-R with FPU, but with ARM_STORE_EXC_RETURN disabled.  Reconcile
the checks.

Signed-off-by: Bradley Bolen <bbolen@lexmark.com>
2022-04-11 10:16:41 +02:00
Daniel Leung
7a431dca95 x86: qemu: add a newline after "Booting from ROM.."
Under QEMU and SeaBIOS, everything gets printed
immediately after "Booting from ROM.." as there is no newline.
This prevents parsing QEMU console output for the very first
line where it needs to match from the beginning of the line.
So add a dummy newline here so the next output is at
the beginning of a line.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2022-04-08 15:48:41 -07:00
Martí Bolívar
f433001185 Kconfig: move CONFIG_BOARD to boards/Kconfig
Moving this option to the subdirectory for boards might make it easier
to find, and will keep it next to some other board-related Kconfig
options set in the same file.

Signed-off-by: Martí Bolívar <marti.bolivar@nordicsemi.no>
2022-04-08 10:30:54 -07:00
Nicolas Pitre
563a8d11a4 arm64: refer to the link register as "lr" rather than "x30"
In ARM parlance, the subroutine call return address is stored in the
"link register" or simply lr. Refer to it as lr which is clearer than
the anonymous x30 designation.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2022-04-07 16:31:30 -05:00
Jiafei Pan
227d1ea1bb arm64: mmu: provide more memory mapping types for z_phys_map()
ARM64 supports more memory mapping types for device memory (nGnRnE,
nGnRE, GRE); add support for these mappings to the common OS mapping
API function z_phys_map().

Signed-off-by: Jiafei Pan <Jiafei.Pan@nxp.com>
2022-04-05 11:17:47 +02:00
Jimmy Brisson
89d0553ca9 cortex-m: Clear pending mpu fault during mpu fault
This is a strange one: the printing code pushes a floating point
register, and is called during the mpu fault. If the floating point
registers are lazily stacked, this fp push can cause another mpu
fault to be pending during the current mpu fault, and tail chained
without returning to PendSV. Since we're already cleaning up the
fp exception reason, we might as well also clean up this pending,
spurious mpu exception.
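
A hedged sketch of that cleanup, assuming the standard CMSIS SCB
definitions; the surrounding fault-handler logic is omitted:

  /* Drop a MemManage fault that became pending while this one was being
   * handled (e.g. from the lazy FP state push), so it does not tail-chain
   * as a spurious second fault.
   */
  static inline void clear_pending_memfault(void)
  {
      SCB->SHCSR &= ~SCB_SHCSR_MEMFAULTPENDED_Msk;
  }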

Signed-off-by: Jimmy Brisson <jimmy.brisson@linaro.org>
2022-04-01 09:16:27 -05:00
Jimmy Brisson
35f9a5d715 cortex-m: Abort pending SVC when a thread is killed
If an SVC was pending during the stack overflow, it will run
after the return of the memory manage fault. To the SVC handler's
misfortune, its invariant, that PSP points to the hardware-stacked
context, is no longer valid. When the user has a
k_sys_fatal_error_handler that tries to kill the thread that caused the
stack overflow, this manifests as the SVC handler reading the memory of
whatever is on the stack after it was adjusted by the mem manage fault
handler, and that leads to unending, spurious hard faults, locking up
the system.

This patch prevents that.

Signed-off-by: Jimmy Brisson <jimmy.brisson@linaro.org>
2022-04-01 09:16:27 -05:00
Nathan Krueger
6a5520c626 arch/riscv: Adding KConfig options for 'A' and 'M' RISC-V extensions
New KConfig options for 'A' and 'M' RISC-V extensions have been
added.  These are used to configure the '-march' string used by GCC
to produce a compatible binary for the requested RISC-V variant.
In order to maintain compatibility with all currently defined SoCs,
default the options for HW mul / Atomics support to 'y', but allow
them to be overridden for any SoC which does not support these.

I tested this change locally via twister against a few RISC-V platforms
including some 32bit and 64bit. To verify the 4 possibilities of Atomics
& HW Mul: (No, No), (No, Yes), (Yes, No), (Yes, Yes -- current behavior),
I used an out-of-tree GCC (xPack RISC-V GCC) which has multilib support
for rv32i, rv32ia, rv32ima to test against our out-of-tree Intel Nios V/m
processor in HW.  The Zephyr SDK RISCV GCC currently does not contain
multilib support for all variants exposed by these new KConfig options.

Signed-off-by: Nathan Krueger <nathan.krueger@intel.com>
2022-03-22 18:00:32 -04:00
Tomasz Bursztyka
1d3dbd49e1 arch/x86: Initialize early serial a tiny bit later
In case of EFI, efi_init must be called before initializing early
serial: if the latter has X86_SOC_EARLY_SERIAL_PCIDEV defined, its pcie
access will try to initialise pcie mmio access, which in turn will try
to find an ACPI table. Calling the ACPI API prior to initializing EFI
marks the RSDP as already looked up... and since it cannot be found
without EFI being initialized first, ACPI is then broken.

Just move early serial initialization to after multiboot/efi setup.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
abf079ce86 arch/x86: Get ACPI RSDP from EFI
EFI may have provided that pointer already, so let's get it from there first.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
f78a4ab7cf zefi: Add an EFI boot argument passing ACPI RSDP info
If such a table pointer is present in the EFI system table, this will
speed up ACPI initialization later on.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
b51a5d3d7c zefi: Adding status code to header
This will be useful when calling EFI functions.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
c7090c5ee6 zefi: Expose EFI configuration lookup function
This will be useful for getting various information, such as the ACPI
table pointer, etc.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
27df16ea8e arch/x86: Prepare EFI support
As with Multiboot, let prep_c be aware of EFI boot.
In the future, EFI will pass an argument to it.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
f19f9db8df arch/x86: Expand cpu boot argument
In order to determine at runtime whether it booted via multiboot or EFI,
let's introduce a dedicated x86 cpu boot argument structure which holds
the type and the actual pointer delivered by the method (multiboot_info
or efi_system_table).
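
A hedged sketch of such a structure; the actual in-tree type and field
names may differ:

  enum x86_boot_type {
      X86_BOOT_TYPE_MULTIBOOT,      /* arg points to a multiboot_info */
      X86_BOOT_TYPE_EFI,            /* arg points to the efi_system_table */
  };

  struct x86_boot_arg {
      enum x86_boot_type type;      /* which loader handed over control */
      void *arg;                    /* loader-specific info pointer */
  };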

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
9fb80d04b4 arch/x86: Expose multiboot init function even when disabled
A dummy function will do.

When enabled, the code does not need the #ifdef, as cmake already
handles this properly. The #ifdef was also using the wrong CONFIG_
symbol anyway.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00
Tomasz Bursztyka
dd7e012458 zefi: Improve generic EFI header
This will prove useful for better EFI support.

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2022-03-22 09:56:54 -04:00