Commit graph

5879 commits

Sudan Landge
b3fe647eaf arch: arm: cortex_a_r: Fix restore of registers while exiting exception
This commit fixes potentially unpredictable behavior caused by using
the ^ form of the ldmia instruction while exiting an exception in SMP
mode on Cortex-A/R.

Change:
Use "pop" instead of "ldmia" to restore user mode registers while
exiting from an exception via `z_arm_cortex_ar_exit_exc`.

Reason for change:
The processor mode is always set to System (MODE_SYS) before calling
`z_arm_cortex_ar_exit_exc`, hence the user mode registers can be
accessed directly without the ^ form of the instruction. Also, the
^ form of LDMIA is UNPREDICTABLE in System mode.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-08-15 12:10:42 -04:00
Sudan Landge
6d8deac010 arch: arm: cortex_a_r: Fix usage of stmdb while entering an exception
This commit fixes unpredictable behavior caused by using the ^ form
of the stmdb instruction while entering an exception in SMP mode on
Cortex-A/R.

Change:
Use "push" instead of "stmdb" to store user mode registers on
stack while entering an exception in SYStem mode.

Reason for change:
As reported in discussion/#75339, the processor is already in SYS mode
after entering `z_arm_cortex_ar_enter_exc()` in an exception, and using
the ^ form of stmdb is UNPREDICTABLE in System mode. Also, the user
mode registers can be accessed directly without the ^ form of the
instruction. The suggested fix is to use
`stmdb sp!, {r0-r3, r12, lr}`, which saves the user registers, updates
the SP, and avoids an extra instruction.
We use the "push" instruction instead, since it is the preferred
mnemonic for `stmdb sp!`.

Signed-off-by: Sudan Landge <sudan.landge@arm.com>
2024-08-15 12:10:42 -04:00
Stan Skowronek
f74592dcde arch: arm: Use voting lock for multi-core boot race condition
Port of a similar change in arm64 that eliminates exclusive load/store
instructions, which may not work when the MMU/MPU/cache are disabled.

Based on: 7904c6f0f3

Signed-off-by: Stan Skowronek <stan@corellium.com>
2024-08-15 11:58:01 -04:00
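A minimal C sketch of the general idea behind such a voting lock, in the spirit of Lamport's bakery algorithm: mutual exclusion built from plain loads and stores only, so it still works before the MMU/MPU and caches are up and exclusive load/store monitors are usable. This is an illustration, not the actual Zephyr arm/arm64 boot code; NUM_CPUS, the data layout, and the omission of memory barriers are simplifying assumptions.

```c
#define NUM_CPUS 4

static volatile unsigned char choosing[NUM_CPUS];
static volatile unsigned int ticket[NUM_CPUS];

static void vote_lock(unsigned int cpu)
{
	unsigned int max = 0;

	/* Announce that we are picking a ticket, then take one larger
	 * than every ticket currently visible.
	 */
	choosing[cpu] = 1;
	for (unsigned int i = 0; i < NUM_CPUS; i++) {
		if (ticket[i] > max) {
			max = ticket[i];
		}
	}
	ticket[cpu] = max + 1;
	choosing[cpu] = 0;

	/* Wait for every CPU holding a smaller ticket (ties broken by
	 * CPU number) to release the lock.
	 */
	for (unsigned int i = 0; i < NUM_CPUS; i++) {
		while (choosing[i]) {
		}
		while (ticket[i] != 0 &&
		       (ticket[i] < ticket[cpu] ||
			(ticket[i] == ticket[cpu] && i < cpu))) {
		}
	}
}

static void vote_unlock(unsigned int cpu)
{
	ticket[cpu] = 0;
}
```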
Daniel Leung
b0185189ad xtensa: mmu: fix page table initialization
xtensa_mmu_init() is called really early in the boot process
where the _kernel struct has not yet been initialized, and
thus we cannot use it to determine if the current CPU is
the boot CPU. In some cases, this may skip the call to
initialize the page tables which leaves us with incorrect
page table entries. Fix it by using a static variable to
determine whether the page tables have been initialized so
we only do it once per boot.

Fixes 

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-08-14 09:36:19 +02:00
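A sketch of the once-per-boot guard described above (illustrative, not the actual xtensa_mmu_init() code): a static flag replaces the _kernel-based boot-CPU check, which is unreliable this early in boot.

```c
#include <stdbool.h>

void xtensa_mmu_init(void)
{
	/* _kernel is not initialized yet, so a static flag makes sure the
	 * shared page tables are only built once per boot.
	 */
	static bool page_tables_initialized;

	if (!page_tables_initialized) {
		/* build the initial page table entries here */
		page_tables_initialized = true;
	}

	/* per-CPU MMU enable steps follow */
}
```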
Mikhail Kushnerov
140f9a57b7 arch: x86: ia32: Implement perf stack trace func
Implement a stack trace function for the x86_32 arch that gets the
required thread register values and unwinds the stack with them.

Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
2024-08-13 18:28:44 -04:00
Mikhail Kushnerov
1588390907 arch: x86: intel64: make perf stack trace func
Implement a stack trace function for the x86_64 arch that gets the
required thread register values and unwinds the stack with them.

Originally-by: Yonatan Goldschmidt <yon.goldschmidt@gmail.com>
Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
2024-08-13 18:28:44 -04:00
Mikhail Kushnerov
a74474c1f9 arch: riscv: Implement perf stack trace func
Implement a stack trace function for the riscv arch that gets the
required thread register values and unwinds the stack with them.

Signed-off-by: Mikhail Kushnerov <m.kushnerov@yadro.com>
2024-08-13 18:28:44 -04:00
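The three commits above implement the same basic technique per architecture: start from the thread's saved registers and follow the frame-pointer chain within the thread's stack bounds. A generic, hedged sketch follows; the struct layout, names, and bounds check are illustrative assumptions, not Zephyr's actual code.

```c
#include <stdint.h>
#include <stdio.h>

struct stack_frame {
	struct stack_frame *fp;	/* caller's saved frame pointer */
	uintptr_t ra;		/* saved return address */
};

static void walk_stack(struct stack_frame *fp,
		       uintptr_t stack_lo, uintptr_t stack_hi)
{
	while (fp != NULL &&
	       (uintptr_t)fp >= stack_lo &&
	       (uintptr_t)fp + sizeof(*fp) <= stack_hi) {
		printf("ra: %#lx\n", (unsigned long)fp->ra);
		fp = fp->fp;	/* hop to the caller's frame */
	}
}
```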
Flavio Ceolin
f284091073 xtensa: core: Remove constant branch
is_dblexc is constant for targets without MMU. There is no need
to check it when building without MMU.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-08-13 18:18:53 -04:00
Alberto Escolar Piedras
bb9704a422 arch posix: Cleanup old code
When use of the native simulator was introduced,
the POSIX architecture and SOC were split in two versions:

One was the old version, which remained in use by native_posix and the
other NATIVE_APPLICATION based targets.
The new version was a shim on top of the native simulator threading
and CPU start/stop emulation.

This was done to ensure no regressions were introduced in the old
targets while the native simulator was tested and matured.

The old SOC code was removed a short while later, and all
NATIVE_APPLICATION targets moved to the shim version on top of the
native simulator.

Now we also remove the old arch code, so native_posix and its
NATIVE_APPLICATION kin use the native simulator NCT component
instead.

Signed-off-by: Alberto Escolar Piedras <alberto.escolar.piedras@nordicsemi.no>
2024-08-13 18:18:25 -04:00
Anas Nashif
f53d5d5712 arch: move custom arch call Kconfigs
Move from kernel/ to arch/ and have all those Kconfigs in one place.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-12 12:43:36 +02:00
Anas Nashif
a91c6e56c8 arch: use same syntax for custom arch calls
Use the same Kconfig syntax for those custom arch calls.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-12 12:43:36 +02:00
Anas Nashif
7f52fc4188 arch: custom cpu_idle and cpu_atomic harmonization
Custom arch_cpu_idle and arch_cpu_atomic_idle implementations were done
differently on different architectures: riscv implemented them as weak
symbols, xtensa used a Kconfig, and all other architectures did not
really care, but this was a global Kconfig that should apply to all
architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-12 12:43:36 +02:00
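One way the harmonized pattern can look in C, shown as a hedged sketch (the option names and the wfi-based default bodies are assumptions for illustration): the generic implementation is only built when the architecture does not declare a custom one via Kconfig, instead of relying on per-architecture weak symbols.

```c
#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_IDLE
void arch_cpu_idle(void)
{
	/* generic fallback: wait for an interrupt */
	__asm__ volatile ("wfi");
}
#endif

#ifndef CONFIG_ARCH_HAS_CUSTOM_CPU_ATOMIC_IDLE
void arch_cpu_atomic_idle(unsigned int key)
{
	__asm__ volatile ("wfi");
	/* re-enable interrupts according to 'key' here */
}
#endif
```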
Dawid Niedzwiecki
f39d8bbe2c arm: clear UNALIGN_TRP bit in CCR register
Clear the UNALIGN_TRP bit in the CCR register, if the config
CONFIG_TRAP_UNALIGNED_ACCESS is not set.

Even though the reset value of UNALIGN_TRP is 0, always clear the bit.
This matters in double-image systems: the new image can't rely on
settings left by the previous image.

Signed-off-by: Dawid Niedzwiecki <dawidn@google.com>
2024-08-12 12:43:16 +02:00
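A sketch of the described change using CMSIS register names for Cortex-M (the surrounding init function is omitted); the point is that the bit is written unconditionally rather than assumed to hold its reset value, since a chain-loaded image inherits whatever the previous image configured.

```c
#if defined(CONFIG_TRAP_UNALIGNED_ACCESS)
	SCB->CCR |= SCB_CCR_UNALIGN_TRP_Msk;
#else
	SCB->CCR &= ~SCB_CCR_UNALIGN_TRP_Msk;
#endif
```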
Yong Cong Sin
5107320075 arch: common: isr_tables: add shell command
Add a shell command to dump the isr_tables.

With CONFIG_SYMTAB=n:
```
uart:~$ isr_table sw_isr_table
_sw_isr_table[1035]

   7: 0x800056e2(0)
  11: 0x80005048(0x80008148)
  22: 0x800054ee(0x80008170)
```

With CONFIG_SYMTAB=y:
```
uart:~$ isr_table sw_isr_table
_sw_isr_table[1035]

   7: timer_isr(0)
  11: plic_irq_handler(0x80008188)
  22: uart_ns16550_isr(0x800081b0)
```

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-12 10:10:57 +02:00
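A hedged sketch of what the dump loop behind such a command could look like, based on Zephyr's software ISR table being an array of { argument, handler } pairs; the header names, IRQ_TABLE_SIZE, z_irq_spurious, and the shell command registration may differ from the real implementation.

```c
#include <zephyr/shell/shell.h>
#include <zephyr/sw_isr_table.h>

static int cmd_dump_isr_table(const struct shell *sh, size_t argc, char **argv)
{
	for (unsigned int i = 0; i < IRQ_TABLE_SIZE; i++) {
		const struct _isr_table_entry *ent = &_sw_isr_table[i];

		if (ent->isr != z_irq_spurious) {	/* skip unused lines */
			shell_print(sh, "%4u: %p(%p)",
				    i, (void *)ent->isr, ent->arg);
		}
	}

	return 0;
}
```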
Anas Nashif
d590c18672 intel_adsp: ace: call soc_num_cpus_init early
Restore the order of execution. Code that was run at the EARLY init
level now runs too late.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Anas Nashif
c79bbfadbb xtensa: move arch_kernel_init code into prep_c
arch_kernel_init() was misused for all the architecture initialization
code that other architectures do in prep_c, prior to cstart.
arch_kernel_init() is late in the init process and comes after the EARLY
init level, giving xtensa a very special boot path.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Anas Nashif
42396735bf xtensa: introduce prep_c for xtensa
xtensa is the only architecture doing things differently, and it
introduces inconsistency in the init process and its dependencies as we
attempt to clean up init levels and remove misuse of SYS_INIT.

Introduce prep_c for this architecture and align with other
architectures.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Anas Nashif
299dddfdce xtensa: remove mention of crt0-app.S
crt0-app.S does not exist; remove it from comments to avoid confusion.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2024-08-07 13:50:53 +02:00
Yong Cong Sin
9698df9dc4 arch: riscv: stacktrace: fix output without ra on the stack top
Account for the scenario when we are doing `esf`-based
unwinding from a function which doesn't have any callee.
In this case the `ra` is not saved on the stack and the
second function from the top of the frame could be missing.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
2024-08-06 17:17:17 -04:00
Mark Holden
e6b683d310 coredump: Guard new kconfig for only supported arch
Add a new config, ARCH_SUPPORTS_COREDUMP_THREADS, and only
enable it for ARM Cortex-M, where the GDB server can
support it.

Signed-off-by: Mark Holden <mholden@meta.com>
2024-08-02 03:32:09 -04:00
Pieter De Gendt
ad63ca284e kconfig: replace known integer constants with variables
Make the intent of the value clear and avoid invalid ranges with typos.

Signed-off-by: Pieter De Gendt <pieter.degendt@basalte.be>
2024-07-27 20:49:15 +03:00
Yong Cong Sin
7db18ab721 arch: riscv: stacktrace: fix user thread stack bound check
According to riscv's `arch.h`:

 +------------+ <- thread.arch.priv_stack_start
 | Guard      | } Z_RISCV_STACK_GUARD_SIZE
 +------------+
 | Priv Stack | } CONFIG_PRIVILEGED_STACK_SIZE
 +------------+ <- thread.arch.priv_stack_start +
                   CONFIG_PRIVILEGED_STACK_SIZE +
                   Z_RISCV_STACK_GUARD_SIZE

The start of the privilege stack should be:

  `thread.arch.priv_stack_start + Z_RISCV_STACK_GUARD_SIZE`

Instead of

  `thread.arch.priv_stack_start - CONFIG_PRIVILEGED_STACK_SIZE`

For the `end`, use the same equation as for `top_of_priv_stack` in
`arch_user_mode_enter()`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-07-27 20:47:51 +03:00
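Written out as C from the layout above (illustrative local variable names), the corrected bounds are:

```c
uintptr_t start = thread->arch.priv_stack_start + Z_RISCV_STACK_GUARD_SIZE;
uintptr_t end   = start + CONFIG_PRIVILEGED_STACK_SIZE;
/* i.e. the same value as top_of_priv_stack in arch_user_mode_enter() */
```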
Jimmy Zheng
421f81a243 arch: riscv: remove PMP stack guard for stack overflow handler
When RISCV_ALWAYS_SWITCH_THROUGH_ECALL is enabled, do_swap() enables PMP
checking in is_kernel_syscall.
If the PMP stack guard is triggered and do_swap() is called from the
fault handler, a PMP error occurs because the stack usage violates the
previous PMP setting.

Remove the stack guard setting during the stack overflow handler to
allow safely enabling PMP checking in the fault handler.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-27 15:12:25 +03:00
Jimmy Zheng
58bda887ce arch: riscv: update PMP setting to privileged mode for fault handler
When RISCV_ALWAYS_SWITCH_THROUGH_ECALL is enabled, do_swap() enables PMP
checking in is_kernel_syscall.
If a user thread violates memory protection and do_swap() is called from
the fault handler, a PMP error occurs because the thread is in privileged
mode but still using the old user mode PMP setting.

Update the PMP setting to privileged mode for the fault handler.
This also enables the stack guard for the user thread's privileged
stack in the fault handler.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-27 15:12:25 +03:00
Jimmy Zheng
ba263da94e arch: riscv: fix exception_depth with RISCV_ALWAYS_SWITCH_THROUGH_ECALL
When 'arch_switch()' switches through Ecall, 'exception_depth' is
incorrectly added to the next thread because the current thread is updated
before arch_switch().
Add 'exception_depth' back to the previous thread when Ecall is called from
'arch_switch()'.

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-12 05:55:26 -04:00
Nicolas Pitre
8d485d4a24 riscv: pmp: actually activate stack overflow protection during boot
Before this, stack protection would be effective only after switching to
the first thread.

Even before the first thread is created, the kernel init code uses the
IRQ stack to set things up. Let's make sure this is safeguarded as well.

This also fixes the incompatibility between CONFIG_RISCV_PMP and
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL, the latter needing an
exception call to switch to the first thread while the exception code
assumes the stack guard is already set up in the PMP.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-07-11 18:22:08 -04:00
Nicolas Pitre
0207c7ff73 riscv: pmp: abstract the MPRV catchall entry and its QEMU workaround
This will be needed at more than one location.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-07-11 18:22:08 -04:00
Wilfried Chauveau
1b820dfa85 arch: arm: cortex_m: add ip & lr to the clobber list
Calls to other functions may clobber ip & lr too, so these registers
need to be added to the clobber list.
r3 is not actually used in z_arm_switch_to_main_no_multithreading, so
it is also removed from the clobber list.

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-07-10 11:58:13 -04:00
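For illustration, a hedged sketch of the clobber-list point in GNU C inline assembly on Arm: an asm block that branches to another function has to declare the registers that call may change, including ip and lr. The callee name here is hypothetical, not the one used in z_arm_switch_to_main_no_multithreading.

```c
extern void example_callee(void);	/* hypothetical callee for the sketch */

static inline void call_example(void)
{
	__asm__ volatile (
		"bl example_callee"
		: /* no outputs */
		: /* no inputs */
		/* AAPCS call-clobbered registers, plus ip & lr */
		: "r0", "r1", "r2", "r3", "ip", "lr", "memory");
}
```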
Sigmund Klåpbakken
0f776cf0bf arch: arm: cmake: Correct endian in output format
Sets the property `PROPERTY_OUTPUT_FORMAT` to `elf32-bigarm` when
`CONFIG_BIG_ENDIAN` is set to `y`.

Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
2024-07-04 18:01:51 -04:00
Sigmund Klåpbakken
5100f1185d arch: arm: cortex_a_r: Set XPSR endianness bit
When this bit is not set, it defaults to 0 (little endian). This
causes issues for big-endian devices, as data will be accessed using
little endian.

Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
2024-07-04 18:01:51 -04:00
Yong Cong Sin
e912a95355 arch: riscv: update the description of INCLUDE_RESET_VECTOR Kconfig
Update the description of the `INCLUDE_RESET_VECTOR` Kconfig so
that it is clearer to the user what it does.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-07-02 09:24:44 +02:00
Daniel Leung
8c10964435 xtensa: mmu: clear ZSR_DEPC_SAVE at boot
ZSR_DEPC_SAVE is used to determine whether we are faulting inside a
double exception: if it is not zero, we assume we are. It is possible
that the boot ROM or custom startup code leaves it non-zero, which
would result in a fake triple fault, so clear it at boot. Note that
the zeroing is done in the MMU init code, as these triple faults are
not actual hardware ones but only semantic, and will only occur once
the MMU is enabled.

Fixes 

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-28 20:53:37 -04:00
Jordan Yates
91f8c1aea9 everywhere: replace #if IS_ENABLED() as per docs
Replace `#if IS_ENABLED()` with `#if defined()` as recommended by the
documentation of `IS_ENABLED`.

Signed-off-by: Jordan Yates <jordan@embeint.com>
2024-06-28 07:20:32 -04:00
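The replacement is mechanical; for a hypothetical CONFIG_FOO option it reads:

```c
/* before */
#if IS_ENABLED(CONFIG_FOO)
void foo_init(void);
#endif

/* after, the form recommended by the IS_ENABLED() documentation for #if */
#if defined(CONFIG_FOO)
void foo_init(void);
#endif
```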
Hess Nathan
8b942e15e2 arch: x86: corrected parameter names
- applied the exact parameter names of the interface to the implementation

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-06-27 20:06:20 -04:00
Duy Nguyen
259b3d0095 arch: arm: Add initial support for Cortex-M85 Core
Add initial support for the Cortex-M85 Core which is an implementation
of the Armv8.1-M mainline architecture.

The support is based on the Cortex-M55 support that already exists in
Zephyr.

Signed-off-by: Duy Nguyen <duy.nguyen.xa@renesas.com>
2024-06-26 13:36:14 -04:00
Nicolas Pitre
b5be646e20 arch/arm64/mmu: improve debugging output
Rationalize the debug log to make it easier to use.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-26 13:06:42 -04:00
Nicolas Pitre
d7e9ea0b71 arch/arm64/mmu: fix page table reference counting part 3
Commit f7e11649fd ("arch/arm64/mmu: fix page table reference
counting") missed another case where the freeing of a whole table
"branch" didn't take into account the fact that some sub-tables might
be shared and therefore must be cleared only if the reference count is
down to 1.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-26 13:06:42 -04:00
J. Neuschäfer
a0d1d0619d arm: Fix typo in comment
Change "etxra" to "extra" in Kconfig comment.

Signed-off-by: J. Neuschäfer <j.ne@posteo.net>
2024-06-25 19:13:00 -04:00
Lingao Meng
302422ad9d everywhere: replace double words
import os
import re

common_words = set([
    'about', 'after', 'all', 'also', 'an', 'and',
     'any', 'are', 'as', 'at',
    'be', 'because', 'but', 'by', 'can', 'come',
    'could', 'day', 'do', 'even',
    'first', 'for', 'get', 'give', 'go', 'has',
    'have', 'he', 'her',
    'him', 'his', 'how', 'I', 'in', 'into', 'it',
    'its', 'just',
    'know', 'like', 'look', 'make', 'man', 'many',
    'me', 'more', 'my', 'new',
    'no', 'not', 'now', 'of', 'one', 'only', 'or',
    'other', 'our', 'out',
    'over', 'people', 'say', 'see', 'she', 'so',
    'some', 'take', 'tell', 'than',
    'their', 'them', 'then', 'there', 'these',
    'they', 'think',
    'this', 'time', 'two', 'up', 'use', 'very',
    'want', 'was', 'way',
    'we', 'well', 'what', 'when', 'which', 'who',
    'will', 'with', 'would',
    'year', 'you', 'your'
])

valid_extensions = set([
    'c', 'h', 'yaml', 'cmake', 'conf', 'txt', 'overlay',
    'rst', 'dtsi',
    'Kconfig', 'dts', 'defconfig', 'yml', 'ld', 'sh', 'py',
    'soc', 'cfg'
])

def filter_repeated_words(text):
    # Split the text into lines
    lines = text.split('\n')

    # Combine lines into a single string with unique separator
    combined_text = '/*sep*/'.join(lines)

    # Replace repeated words within a line
    def replace_within_line(match):
        return match.group(1)

    # Regex for matching repeated words within a line
    within_line_pattern = re.compile(
        r'\b(' + '|'.join(map(re.escape, common_words)) + r')\b\s+\b\1\b')
    combined_text = within_line_pattern.sub(replace_within_line, combined_text)

    # Replace repeated words across line boundaries
    def replace_across_lines(match):
        return match.group(1) + match.group(2)

    # Regex for matching repeated words across line boundaries
    across_lines_pattern = re.compile(
        r'\b(' + '|'.join(map(re.escape, common_words)) +
        r')\b(\s*[*\/\n\s]*)\b\1\b')
    combined_text = across_lines_pattern.sub(replace_across_lines, combined_text)

    # Split the text back into lines
    filtered_text = combined_text.split('/*sep*/')

    return '\n'.join(filtered_text)

def process_file(file_path):
    with open(file_path, 'r', encoding='utf-8') as file:
        text = file.read()

    new_text = filter_repeated_words(text)

    with open(file_path, 'w', encoding='utf-8') as file:
        file.write(new_text)

def process_directory(directory_path):
    for root, dirs, files in os.walk(directory_path):
        dirs[:] = [d for d in dirs if not d.startswith('.')]
        for file in files:
            # Filter out hidden files
            if file.startswith('.'):
                continue
            file_extension = file.split('.')[-1]
            if file_extension in valid_extensions:  # only process files with the listed extensions
                file_path = os.path.join(root, file)
                print(f"Processed file: {file_path}")
                process_file(file_path)

directory_to_process = "/home/mi/works/github/zephyrproject/zephyr"
process_directory(directory_to_process)

Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
2024-06-25 06:05:35 -04:00
Nicolas Pitre
a4ebd5e8db arch/arm64/mmu: move hardware specifics to private header
None of the moved definitions are meant to be used by any code outside
of arch/arm64/core/mmu.c. Move them away from global scope to the
private header where more such definitions already live.

This is especially relevant as the previous commit fixed some of those
definitions which then caused conflicts with some external SDK that
carries a copy of those original buggy definitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-24 22:24:50 -04:00
Nicolas Pitre
1e27100442 arch/arm64/mmu: use 64-bit mask values with page table descriptors
Inverting a mask whose type has only 32 bits doesn't produce the
expected result. Fix those to be 64-bit values.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-24 22:24:50 -04:00
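A small C illustration of the failure mode (the descriptor and mask values are made up, not the real arm64 descriptor fields): inverting a 32-bit mask and applying it to a 64-bit descriptor zero-extends the inverted mask and wipes the descriptor's upper 32 bits.

```c
#include <stdint.h>

static void mask_example(void)
{
	uint64_t a = 0x0040000012345FFFULL;	/* 64-bit descriptor */
	uint64_t b = 0x0040000012345FFFULL;
	uint32_t mask32 = 0xFFCU;
	uint64_t mask64 = 0xFFCULL;

	/* BUG: ~mask32 is the 32-bit value 0xFFFFF003; zero-extended to
	 * 64 bits it also clears bits 63..32 of the descriptor.
	 */
	a &= ~mask32;	/* a == 0x0000000012345003: upper bits lost */

	/* OK: ~mask64 is 0xFFFFFFFFFFFFF003, upper bits preserved. */
	b &= ~mask64;	/* b == 0x0040000012345003 */
}
```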
Jordan Yates
07870934e3 everywhere: replace double words
Treewide search and replace on a range of double word combinations:
    * `the the`
    * `to to`
    * `if if`
    * `that that`
    * `on on`
    * `is is`
    * `from from`

Signed-off-by: Jordan Yates <jordan@embeint.com>
2024-06-22 05:40:22 -04:00
Yong Cong Sin
2bb78f3e1a arch: riscv: stacktrace: use current thread if thread is NULL
Zephyr's thread stack size is not fixed; in most cases we need
the `thread` argument to obtain the `stack_info`, unless we are
unwinding the IRQ stack, since that one is fixed.

Otherwise we can only safely print the current `mepc` register;
unwinding the esf without a thread's stack info can result in
undefined behavior.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-21 14:55:27 -04:00
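A short sketch of the described fallback (names follow the commit text; treat it as illustrative rather than the exact diff):

```c
if (thread == NULL) {
	/* Caller did not say whose stack this is: assume the running
	 * thread, whose stack_info gives safe unwinding bounds.
	 */
	thread = _current;
}
```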
Yong Cong Sin
b79a68fc49 arch: riscv: stacktrace: refactor fatal stack bound
Pass the current thread to `walk_stackframe()`, so that we do not
need to hardcode `_current` in `in_fatal_stack_bound()`, which will
allow it to reuse `in_stack_bound()`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-21 14:55:27 -04:00
Daniel Leung
c44c922198 soc: xtensa/dc233c: remove xtensa_dc233c_stack_ptr_is_sane
The generic stack pointer checker in the architecture code is
enough, so we can remove the platform-specific one. Besides,
xtensa_dc233c_stack_ptr_is_sane() does not do much checking
either.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
31c96cf395 xtensa: check stack boundaries during backtrace
This checks for stack boundaries during backtrace to make sure
we are not stepping into invalid memory.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
5b84bb4f4a xtensa: check stack frame pointer before dumping registers
Check that the stack frame pointer is valid before dumping
any registers while handling exceptions. If the pointer is
invalid, anything it points to will probably also be invalid,
and accessing it may result in another access violation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
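A hedged sketch of the kind of check this implies, assuming the stack bounds are known (not Zephyr's actual xtensa code):

```c
#include <stdbool.h>
#include <stdint.h>

static bool frame_ptr_is_sane(uintptr_t frame,
			      uintptr_t stack_start, uintptr_t stack_end)
{
	/* Only dereference the saved frame if it lies inside the faulting
	 * thread's stack and is properly aligned.
	 */
	return frame >= stack_start && frame < stack_end &&
	       (frame & (sizeof(uintptr_t) - 1)) == 0;
}
```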
Daniel Leung
cb9f8b1019 xtensa: separate FATAL EXCEPTION printout into two
It is observed that, while logging a fatal exception, it prints
"FATAL EXCEPTION(null)". The exact reason is unknown, as debugging
through GDB makes it all work again. So separate it into two log
statements to avoid this situation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
da702463b9 x86: select CONFIG_THREAD_STACK_INFO for exception stack trace
When CONFIG_X86_EXCEPTION_STACK_TRACE is enabled, also forcibly
enable CONFIG_THREAD_STACK_INFO. Without the thread stack info,
it is possible the stack unwinding would go outside the thread
stack and into unknown memory, resulting in a hard fault.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-17 21:19:32 -04:00
Flavio Ceolin
36ef3da2d7 arch: xtensa: fatal: Comply with MISRA Rule 14.4
Use boolean expression in a controlling expression.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-17 17:46:16 -04:00