Commit graph

5555 commits

Jimmy Zheng
ba263da94e arch: riscv: fix exception_depth with RISCV_ALWAYS_SWITCH_THROUGH_ECALL
When 'arch_switch()' switches through Ecall, 'exception_depth' is
incorrectly added to the next thread because the current thread is updated
before 'arch_switch()'.
Add 'exception_depth' back to the previous thread when Ecall is invoked
from 'arch_switch()'.
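
For illustration, a minimal sketch of the idea with simplified, hypothetical
types (the real field lives in the RISC-V arch thread struct; this is not the
actual patch):

    /* Sketch only -- hypothetical, simplified types. */
    struct thread {
        int exception_depth;
    };

    /* The ecall entry path increments exception_depth on whatever thread is
     * "current". When arch_switch() traps through ecall, the scheduler has
     * already made the *next* thread current, so that increment is
     * misattributed. Give it back to the previous thread. */
    static void fixup_exception_depth(struct thread *prev, struct thread *next)
    {
        next->exception_depth--;
        prev->exception_depth++;
    }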

Signed-off-by: Jimmy Zheng <jimmyzhe@andestech.com>
2024-07-12 05:55:26 -04:00
Nicolas Pitre
8d485d4a24 riscv: pmp: actually activate stack overflow protection during boot
Before this, stack protection would be effective only after switching to
the first thread.

Even before the first thread is created, the kernel init code uses the
IRQ stack to set things up. Let's make sure this is safeguarded as well.

This also fixes the incompatibility between CONFIG_RISCV_PMP and
CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL, the latter needing an exception
call to switch to the first thread while the exception code assumes the
stack guard is already set up in the PMP.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-07-11 18:22:08 -04:00
Nicolas Pitre
0207c7ff73 riscv: pmp: abstract the MPRV catchall entry and its QEMU workaround
This will be needed at more than one location.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-07-11 18:22:08 -04:00
Wilfried Chauveau
1b820dfa85 arch: arm: cortex_m: add ip & lr to the clobber list
Calls to other functions may clobber ip & lr too, so these registers need
to be added to the clobber list.
r3 is not actually used in z_arm_switch_to_main_no_multithreading, so it is
also removed from the clobber list.
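
For illustration, a hedged sketch of the general shape (names like
entry_point are placeholders, not the Zephyr source): any C function called
from inline asm may trash the AAPCS caller-saved registers, so ip and lr
belong in the clobber list.

    /* Sketch only: calling a C function from inline asm on Cortex-M.
     * Per the AAPCS the callee may clobber r0-r3, r12 (ip) and lr, so
     * any of those not used as operands belong in the clobber list. */
    extern void entry_point(void);

    static inline void tail_call_entry(void)
    {
        __asm__ volatile (
            "blx %0\n"
            :
            : "r" (entry_point)
            : "r0", "r1", "r2", "r3", "ip", "lr", "memory");
    }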

Signed-off-by: Wilfried Chauveau <wilfried.chauveau@arm.com>
2024-07-10 11:58:13 -04:00
Sigmund Klåpbakken
0f776cf0bf arch: arm: cmake: Correct endian in output format
Sets the property `PROPERTY_OUTPUT_FORMAT` to `elf32-bigarm` when
`CONFIG_BIG_ENDIAN` is set to `y`.

Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
2024-07-04 18:01:51 -04:00
Sigmund Klåpbakken
5100f1185d arch: arm: cortex_a_r: Set XPSR endianness bit
When this bit is not set, it defaults to 0 (little endian). This
causes issues for big-endian devices, as data will be accessed as
little endian.

Signed-off-by: Sigmund Klåpbakken <sigmundklaa@outlook.com>
2024-07-04 18:01:51 -04:00
Yong Cong Sin
e912a95355 arch: riscv: update the description of INCLUDE_RESET_VECTOR Kconfig
Update the description of the `INCLUDE_RESET_VECTOR` Kconfig so
that it is clearer to the user what it does.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-07-02 09:24:44 +02:00
Daniel Leung
8c10964435 xtensa: mmu: clear ZSR_DEPC_SAVE at boot
ZSR_DEPC_SAVE is used to determine whether we are faulting inside
a double exception when it is non-zero. It is possible that
the boot ROM or custom startup code leaves this non-zero, which
would result in a fake triple fault. So clear it at boot. Note
that the zeroing is done in the MMU init code, as these triple
faults are not actual hardware ones but only semantic ones, and
will only occur once the MMU is enabled.

Fixes #75194

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-28 20:53:37 -04:00
Jordan Yates
91f8c1aea9 everywhere: replace #if IS_ENABLED() as per docs
Replace `#if IS_ENABLED()` with `#if defined()` as recommended by the
documentation of `IS_ENABLED`.
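
A small self-contained illustration of the two forms involved (CONFIG_FOO is
a placeholder symbol, not a real option):

    /* Sketch: the two forms this cleanup swaps between. */
    #include <zephyr/sys/util_macro.h>   /* for IS_ENABLED() */

    /* Before: preprocessor conditional spelled with IS_ENABLED() */
    #if IS_ENABLED(CONFIG_FOO)
    void foo_init(void);
    #endif

    /* After: plain defined() check, as the IS_ENABLED() docs recommend
     * for #if lines. IS_ENABLED() stays useful inside C expressions: */
    #if defined(CONFIG_FOO)
    void foo_init(void);
    #endif

    static inline void maybe_init(void)
    {
        if (IS_ENABLED(CONFIG_FOO)) {   /* compiles out when CONFIG_FOO is unset */
            /* foo_init(); */
        }
    }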

Signed-off-by: Jordan Yates <jordan@embeint.com>
2024-06-28 07:20:32 -04:00
Hess Nathan
8b942e15e2 arch: x86: corrected parameter names
- applied the exact parameter names of the interface to the implementation

Signed-off-by: Hess Nathan <nhess@baumer.com>
2024-06-27 20:06:20 -04:00
Duy Nguyen
259b3d0095 arch: arm: Add initial support for Cortex-M85 Core
Add initial support for the Cortex-M85 Core which is an implementation
of the Armv8.1-M mainline architecture.

The support is based on the Cortex-M55 support that already exists in
Zephyr.

Signed-off-by: Duy Nguyen <duy.nguyen.xa@renesas.com>
2024-06-26 13:36:14 -04:00
Nicolas Pitre
b5be646e20 arch/arm64/mmu: improve debugging output
Rationalize the debug log to make it easier to use.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-26 13:06:42 -04:00
Nicolas Pitre
d7e9ea0b71 arch/arm64/mmu: fix page table reference counting part 3
Commit f7e11649fd ("arch/arm64/mmu: fix page table reference
counting") missed another case where the freeing of a whole table
"branch" didn't take into account the fact that some sub-tables might
be shared and therefore must be cleared only if the reference count is
down to 1.
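
A minimal sketch of the rule, with hypothetical helpers rather than the
actual arm64 MMU code: a sub-table that is still referenced elsewhere only
has its reference dropped, never its contents cleared.

    /* Sketch only -- simplified types and names, not the arm64 code. */
    #include <stdbool.h>
    #include <stdint.h>

    #define PTES_PER_TABLE 512

    struct pt_page {
        uint64_t pte[PTES_PER_TABLE];
        int refcount;       /* how many parents/domains point at this table */
    };

    static bool pte_is_table(uint64_t pte) { return (pte & 0x3) == 0x3; }

    static struct pt_page *pte_to_table(uint64_t pte)
    {
        return (struct pt_page *)(uintptr_t)(pte & ~0x3ULL); /* strip descriptor bits */
    }

    static void free_table(struct pt_page *t)
    {
        (void)t;  /* return the page to the pool; elided in this sketch */
    }

    static void discard_branch(struct pt_page *table)
    {
        for (int i = 0; i < PTES_PER_TABLE; i++) {
            if (!pte_is_table(table->pte[i])) {
                continue;
            }
            struct pt_page *sub = pte_to_table(table->pte[i]);

            if (sub->refcount > 1) {
                /* Shared with another domain: drop our reference only. */
                sub->refcount--;
            } else {
                /* Last reference: safe to clear and free recursively. */
                discard_branch(sub);
                free_table(sub);
            }
            table->pte[i] = 0;
        }
    }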

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-26 13:06:42 -04:00
J. Neuschäfer
a0d1d0619d arm: Fix typo in comment
Change "etxra" to "extra" in Kconfig comment.

Signed-off-by: J. Neuschäfer <j.ne@posteo.net>
2024-06-25 19:13:00 -04:00
Lingao Meng
302422ad9d everywhere: replace double words
import os
import re

common_words = set([
    'about', 'after', 'all', 'also', 'an', 'and',
     'any', 'are', 'as', 'at',
    'be', 'because', 'but', 'by', 'can', 'come',
    'could', 'day', 'do', 'even',
    'first', 'for', 'get', 'give', 'go', 'has',
    'have', 'he', 'her',
    'him', 'his', 'how', 'I', 'in', 'into', 'it',
    'its', 'just',
    'know', 'like', 'look', 'make', 'man', 'many',
    'me', 'more', 'my', 'new',
    'no', 'not', 'now', 'of', 'one', 'only', 'or',
    'other', 'our', 'out',
    'over', 'people', 'say', 'see', 'she', 'so',
    'some', 'take', 'tell', 'than',
    'their', 'them', 'then', 'there', 'these',
    'they', 'think',
    'this', 'time', 'two', 'up', 'use', 'very',
    'want', 'was', 'way',
    'we', 'well', 'what', 'when', 'which', 'who',
    'will', 'with', 'would',
    'year', 'you', 'your'
])

valid_extensions = set([
    'c', 'h', 'yaml', 'cmake', 'conf', 'txt', 'overlay',
    'rst', 'dtsi',
    'Kconfig', 'dts', 'defconfig', 'yml', 'ld', 'sh', 'py',
    'soc', 'cfg'
])

def filter_repeated_words(text):
    # Split the text into lines
    lines = text.split('\n')

    # Combine lines into a single string with unique separator
    combined_text = '/*sep*/'.join(lines)

    # Replace repeated words within a line
    def replace_within_line(match):
        return match.group(1)

    # Regex for matching repeated words within a line
    within_line_pattern = re.compile(
        r'\b(' + '|'.join(map(re.escape, common_words)) + r')\b\s+\b\1\b')
    combined_text = within_line_pattern.sub(replace_within_line, combined_text)

    # Replace repeated words across line boundaries
    def replace_across_lines(match):
        return match.group(1) + match.group(2)

    # Regex for matching repeated words across line boundaries
    across_lines_pattern = re.compile(
        r'\b(' + '|'.join(map(re.escape, common_words)) + r')\b(\s*[*\/\n\s]*)\b\1\b')
    combined_text = across_lines_pattern.sub(replace_across_lines, combined_text)

    # Split the text back into lines
    filtered_text = combined_text.split('/*sep*/')

    return '\n'.join(filtered_text)

def process_file(file_path):
    with open(file_path, 'r', encoding='utf-8') as file:
        text = file.read()

    new_text = filter_repeated_words(text)

    with open(file_path, 'w', encoding='utf-8') as file:
        file.write(new_text)

def process_directory(directory_path):
    for root, dirs, files in os.walk(directory_path):
        dirs[:] = [d for d in dirs if not d.startswith('.')]
        for file in files:
            # Filter out hidden files
            if file.startswith('.'):
                continue
            file_extension = file.split('.')[-1]
            if file_extension in valid_extensions:  # only process files with the allowed extensions
                file_path = os.path.join(root, file)
                print(f"Processed file: {file_path}")
                process_file(file_path)

directory_to_process = "/home/mi/works/github/zephyrproject/zephyr"
process_directory(directory_to_process)

Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
2024-06-25 06:05:35 -04:00
Nicolas Pitre
a4ebd5e8db arch/arm64/mmu: move hardware specifics to private header
None of the moved definitions are meant to be used by any code outside
of arch/arm64/core/mmu.c. Move them away from global scope to the
private header where more such definitions already live.

This is especially relevant as the previous commit fixed some of those
definitions which then caused conflicts with some external SDK that
carries a copy of those original buggy definitions.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-24 22:24:50 -04:00
Nicolas Pitre
1e27100442 arch/arm64/mmu: use 64-bit mask values with page table descriptors
Inverting a mask whose type has only 32 bits doesn't produce the
expected result. Fix those to be 64-bit values.
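
The pitfall in miniature (assumes the usual case where unsigned int is
32 bits; the descriptor value is made up for the example):

    /* Sketch of the bug class: ~ on a 32-bit mask vs. a 64-bit mask. */
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t desc = 0x0040000000000703ULL;   /* 64-bit page table descriptor */

        uint32_t mask32 = 0x703U;                /* 32-bit mask type */
        uint64_t bad    = desc & ~mask32;        /* ~mask32 == 0xFFFFF8FC: upper 32 bits of desc are lost */

        uint64_t mask64 = 0x703ULL;              /* 64-bit mask type */
        uint64_t good   = desc & ~mask64;        /* upper bits preserved */

        printf("bad  = 0x%016" PRIx64 "\n", bad);   /* 0x0000000000000700 */
        printf("good = 0x%016" PRIx64 "\n", good);  /* 0x0040000000000700 */
        return 0;
    }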

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-24 22:24:50 -04:00
Jordan Yates
07870934e3 everywhere: replace double words
Treewide search and replace on a range of double word combinations:
    * `the the`
    * `to to`
    * `if if`
    * `that that`
    * `on on`
    * `is is`
    * `from from`

Signed-off-by: Jordan Yates <jordan@embeint.com>
2024-06-22 05:40:22 -04:00
Yong Cong Sin
2bb78f3e1a arch: riscv: stacktrace: use current thread if thread is NULL
Zephyr's thread stack size is not fixed; in most cases we need
the `thread` argument to obtain the `stack_info`, unless we are
unwinding the irq stack, since that is fixed.

Otherwise we can only safely print the current `mepc` register;
unwinding the esf without the stack info of a thread can
result in undefined behavior.
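
A hedged sketch of the convention, with a simplified signature rather than
the exact Zephyr one:

    /* Sketch only -- simplified, hypothetical types and names. */
    #include <stddef.h>

    struct thread_stack_info { unsigned long start; unsigned long size; };
    struct thread { struct thread_stack_info stack_info; };

    extern struct thread *current_thread;   /* stand-in for _current */

    static void unwind_stack(const struct thread *thread)
    {
        if (thread == NULL) {
            /* NULL means "the thread we are running on right now". */
            thread = current_thread;
        }

        unsigned long low  = thread->stack_info.start;
        unsigned long high = low + thread->stack_info.size;

        /* ... walk frame pointers, accepting only addresses in [low, high) ... */
        (void)low;
        (void)high;
    }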

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-21 14:55:27 -04:00
Yong Cong Sin
b79a68fc49 arch: riscv: stacktrace: refactor fatal stack bound
Pass the current thread to `walk_stackframe()`, so that we do
not need to hardcode `_current` in `in_fatal_stack_bound()`,
which will allow it to reuse `in_stack_bound()`.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-21 14:55:27 -04:00
Daniel Leung
c44c922198 soc: xtensa/dc233c: remove xtensa_dc233c_stack_ptr_is_sane
The generic stack pointer checker in the architecture code is
enough, so we can remove the platform-specific one. Besides,
xtensa_dc233c_stack_ptr_is_sane() does not do much checking
either.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
31c96cf395 xtensa: check stack boundaries during backtrace
This checks for stack boundaries during backtrace to make sure
we are not stepping into invalid memory.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
5b84bb4f4a xtensa: check stack frame pointer before dumping registers
Check that the stack frame pointer is valid before dumping
any registers while handling exceptions. If the pointer is
invalid, anything it points to will probably also be invalid,
and accessing it may result in another access violation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
cb9f8b1019 xtensa: separate FATAL EXCEPTION printout into two
It is observed that, while logging a fatal exception, it
prints "FATAL EXCEPTION(null)". The exact reason is unknown,
as debugging through GDB makes it all work again. So separate
it into two log statements to avoid this situation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-21 09:59:36 +02:00
Daniel Leung
da702463b9 x86: select CONFIG_THREAD_STACK_INFO for exception stack trace
When CONFIG_X86_EXCEPTION_STACK_TRACE is enabled, also forcibly
enable CONFIG_THREAD_STACK_INFO. Without the thread stack info,
it is possible for the stack unwinding to go out of the thread
stack and into unknown memory, resulting in a hard fault.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-17 21:19:32 -04:00
Flavio Ceolin
36ef3da2d7 arch: xtensa: fatal: Comply with MISRA Rule 14.4
Use boolean expression in a controlling expression.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2024-06-17 17:46:16 -04:00
Daniel Leung
c1a462e1a5 xtensa: mmu: bail on semantic triple faults
There actually are no triple faults on Xtensa. Once PS.EXCM is
set, any new exception keeps going through the double exception
vector. However, our exception code needs to unmask PS.EXCM to
enable register window operations, so after that, any new
exceptions go through the kernel or user vectors depending on
PS.UM. If there are continuous faults, it may keep ping-ponging
between the double and kernel/user exception vectors and may
never get resolved. Since we stash DEPC during a double
exception, and the stashed value is only cleared once the double
exception has been processed, we can use the stashed DEPC value
to detect whether the next exception should be considered a
triple fault. If such a case exists, simply jump to an infinite
loop, quit the simulator, or invoke the debugger.
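
In outline, with hypothetical names rather than the real Xtensa exception
glue: the stashed DEPC acts as a "double exception in progress" flag, and
seeing it still set when a new exception arrives is treated as a triple
fault.

    /* Sketch only -- hypothetical variable and helper names. */
    static unsigned int stashed_depc;   /* set on double-exception entry, cleared once handled */

    static void handle_exception(unsigned int depc, unsigned int exccause)
    {
        if (stashed_depc != 0U) {
            /* A double exception was never fully resolved before this new
             * exception arrived: treat it as a (semantic) triple fault. */
            for (;;) {
                /* dead end: spin, exit the simulator, or trap into a debugger */
            }
        }
        /* ... normal exception handling ... */
        (void)depc;
        (void)exccause;
    }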

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
d344a6bc85 xtensa: make arch_user_string_nlen actually work
arch_user_string_nlen() did not work correctly, as any invalid
pointer passed in was dereferenced naively, resulting in DTLB
misses (MMU) or access errors (MPU). However,
arch_user_string_nlen() should always return to the caller
with an appropriate error code set, and should never result in
thread termination.

Since we usually go through syscalls when arch_user_string_nlen()
is called, for MMU the DTLB miss goes through the double
exception. Since the pointer is invalid, there is a high chance
there is not even an L2 page table associated with that bad
address, so the DTLB miss cannot be handled and it just keeps
looping in double exception until there is another exception
type where we get to the C handler. However, the stack frame is
no longer the frame associated with the call to
arch_user_string_nlen(), and the call return address would be
incorrect. Forcing this incorrect address as the next PC results
in some other exception, e.g. illegal instruction, which goes to
the C handler again. This time it reaches the end of the handler
and results in thread termination. For MPU systems, access
errors simply result in thread termination in the C handler.
Because of these reasons, change arch_user_string_nlen() to
check whether the memory region can be accessed under kernel
mode first, before feeding it to strnlen().

Also remove the exception fixup arrays as there is nothing
there anymore.
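
A hedged sketch of the new shape; the function names come from these
commits, but the exact signatures and parameter lists are assumptions:

    /* Sketch only -- simplified; the real arch interface may differ. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    /* From the neighbouring commit; parameter list assumed here. */
    extern bool xtensa_mem_kernel_has_access(void *addr, size_t size, int write);

    size_t arch_user_string_nlen(const char *s, size_t maxsize, int *err)
    {
        /* Validate the whole candidate range up front instead of relying
         * on a page fault while strnlen() walks a bad pointer. */
        if (!xtensa_mem_kernel_has_access((void *)s, maxsize, 0)) {
            *err = -1;          /* tell the syscall layer the string is bad */
            return 0;
        }

        *err = 0;
        return strnlen(s, maxsize);
    }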

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
79939e3279 xtensa: mmu: mpu: add xtensa_mem_kernel_has_access()
This adds a new function xtensa_mem_kernel_has_access() to
determine if a memory region can be accessed by kernel threads.
This allows checking for valid mapped memory before accessing
it, to avoid relying on page faults to detect invalid access.

Also fixed an issue with arch_buffer_validate() on MPU where
it may return okay even if the incoming memory region has no
corresponding entry in the MPU table.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
61ec0d15d5 xtensa: mmu: arch_buffer_validate is only for user thread
arch_buffer_validate() is only meant to verify that user threads
have access to the memory region. It should not be used to verify
whether kernel threads have access (which they should have
anyway). So change the logic.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
27f4e7fe0c xtensa: only use BREAK if explicitly enabled
Introduce CONFIG_XTENSA_BREAK_ON_UNRECOVERABLE_EXCEPTIONS to
use the BREAK instruction for unrecoverable exceptions. This
definitely requires a debugger to be attached to the hardware
or simulator to catch it.

Also move the infinite loop so it does NOT result in an infinite
interrupt storm from the debug interrupt being triggered over
and over again. The same goes for the simcall exit, as it does
not need to be called repeatedly.
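
Roughly, as a hedged sketch (not the actual vector code; the BREAK operand
values are purely illustrative):

    /* Sketch only -- not the actual Xtensa vector code. */
    static void unrecoverable_exception(void)
    {
    #ifdef CONFIG_XTENSA_BREAK_ON_UNRECOVERABLE_EXCEPTIONS
        /* Needs a debugger (or simulator) attached to catch it. */
        __asm__ volatile ("break 1, 1");   /* operand values illustrative */
    #endif

        /* Park here instead of re-issuing BREAK/simcall in a tight loop,
         * which would just storm the debug interrupt. */
        for (;;) {
        }
    }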

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
bc3e77b356 xtensa: make it work with TLB misses during interrupt handling
If there are any TLB misses during interrupt handling,
the user, kernel and double exception vectors will be triggered
for the miss and DEPC and EXCCAUSE overwritten, as the TLB
misses are handled in the assembly code and execution returns
to the original vector code. Because of this, both DEPC and
EXCCAUSE read in the C handler are not the ones that triggered
the original exception (for example, a level-1 interrupt). So
stash both DEPC and EXCCAUSE so that the original cause of the
exception is visible in the C handler.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
371ad016f8 xtensa: no need to clear DEPC on C handler exit for MPU
Xtensa MPU code does not handle double exception in C. So there
is no need to clear DEPC on C handler exit.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
b696257eb2 xtensa: fix getting exccause during backtrace
We have the frame pointer struct and the BSA struct to extract
the exception cause (exccause); there is no need to resort to
custom assembly to do that. Besides, given that the BSA differs
between Xtensa cores, there is no guarantee it is at the same
place as what the assembly assumes. So just do it without
assembly.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Daniel Leung
682b572414 xtensa: remove ZSR_MMU_0 and ZSR_MMU_1
They are not being used in the code so there is no need to
reserve them as scratch registers.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-15 04:44:48 -04:00
Raffael Rostagno
9265c82313 soc: esp32c6: Kconfig and .ld updates, DTS and comments fix
Kconfig, .ld and comment fixes.
Fixed the address of UART1. WDT and RTC timer disabled by default.

Signed-off-by: Raffael Rostagno <raffael.rostagno@espressif.com>
2024-06-14 18:51:46 -04:00
Nicolas Pitre
696edbe841 arm64: move simple memcpy/memset alternatives to assembly
Assembly implementation for z_early_memset() and z_early_memcpy().
Otherwise the compiler will happily replace our C code with a direct
call to memset/memcpy, which kind of defeats the purpose.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-14 17:11:40 -04:00
Nicolas Pitre
64855973c0 arm64: speed up simple memcpy/memset alternatives
We need those simple alternatives to be used during early boot when the
MMU is not yet enabled. However they don't have to be the slowest they
can be. Those functions are mainly used to clear .bss sections and copy
.data to final destination when doing XIP, etc. Therefore it is very
likely for provided pointers to be 64-bit aligned. Let's optimize for
that case.
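
The alignment idea in C form, as a hedged sketch (the commit itself
implements this in assembly precisely because the compiler may turn such C
code back into a memset call; the function name here is a placeholder):

    /* Sketch of the optimization: byte stores only for unaligned head/tail,
     * 64-bit stores for the aligned bulk (the common case for .bss). */
    #include <stddef.h>
    #include <stdint.h>

    void early_memset(void *dst, int c, size_t n)
    {
        uint8_t *d = dst;
        uint64_t pattern = 0x0101010101010101ULL * (uint8_t)c;

        /* Byte stores until the pointer is 64-bit aligned. */
        while (n > 0 && ((uintptr_t)d & 7) != 0) {
            *d++ = (uint8_t)c;
            n--;
        }

        /* 64-bit stores for the aligned bulk. */
        while (n >= 8) {
            *(uint64_t *)d = pattern;
            d += 8;
            n -= 8;
        }

        /* Byte stores for the tail. */
        while (n > 0) {
            *d++ = (uint8_t)c;
            n--;
        }
    }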

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-14 17:11:40 -04:00
Giardina, Anthony
809c0923c6 xtensa: userspace: fix uninitialized return values in mpu_map_region_add
Ensure that *first_idx is populated for the case of adding entries
to an empty table.

Signed-off-by: Anthony Giardina <anthony.giardina@intel.com>
2024-06-14 14:49:29 -04:00
Evgeniy Paltsev
ec72339d61 ARC: enable barriers for HS
As we start to use data memory barriers in the SMP scheduler code
explicitly (not only internally in the atomics implementation),
let's enable barriers for ARC HS.

Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Evgeniy Paltsev <PaltsevEvgeniy@gmail.com>
2024-06-14 15:38:39 +02:00
Yong Cong Sin
bb66d1188c arch: riscv: stacktrace: conditionally check stack_info
Check whether an address is in the thread stack only when
`CONFIG_THREAD_STACK_INFO` is enabled, since otherwise the
`stack_info` will not be available.

This fixes a compilation error when `CONFIG_THREAD_STACK_INFO`
is explicitly disabled.
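
The guard in outline, as a hedged sketch (the field names follow Zephyr's
stack_info, but the helper itself is hypothetical):

    /* Sketch only: consult stack_info only when the kernel tracks it. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <zephyr/kernel.h>

    static bool addr_in_thread_stack(const struct k_thread *thread, uintptr_t addr)
    {
    #ifdef CONFIG_THREAD_STACK_INFO
        uintptr_t start = thread->stack_info.start;
        uintptr_t end   = start + thread->stack_info.size;

        return (addr >= start) && (addr < end);
    #else
        (void)thread;
        (void)addr;
        /* Without stack_info there is nothing to bound-check against. */
        return true;
    #endif
    }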

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-14 05:22:59 -04:00
Grant Ramsay
c5642a7b4d arch: kconfig: Set flash size/address to 0 by default when !XIP
Many boards/SoCs in-tree do this:
    if !XIP
    config FLASH_SIZE
        default 0
    config FLASH_BASE_ADDRESS
        default 0
    endif

And many other boards are missing this configuration (e.g. the stm32
series). Making this the default helps non-XIP configurations just work.

Signed-off-by: Grant Ramsay <gramsay@enphaseenergy.com>
2024-06-13 20:15:35 -04:00
Nicolas Pitre
baa70d8d36 arch/arm64/mmu: fix page table reference counting part 2
Commit f7e11649fd ("arch/arm64/mmu: fix page table reference
counting") missed a case where the freeing of a table wasn't propagated
properly to all domains. To fix this, the page freeing logic was removed
from set_mapping() and a del_mapping() was created instead, to be used
by both remove_map() and globalize_table().

A test covering this case should definitely be created but that'll come
later. Proper operation was verified through manual debug log
inspection for now.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 17:03:33 -04:00
Marcin Szymczyk
0b20e2afa7 arch: riscv: skip isr.S when SW ISR table is not generated
`isr.S` depends on `CONFIG_GEN_SW_ISR_TABLE`.
Do not build it if SW ISR table is not present.

Signed-off-by: Marcin Szymczyk <marcin.szymczyk@nordicsemi.no>
2024-06-13 16:57:32 -04:00
Yong Cong Sin
61a0f9f1c0 arch: riscv: stop printing symbol name at mepc
Now that the unwind starts from mepc already, the symbol
name at the mepc reg is somewhat redundant, so just remove it.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-13 16:46:48 -04:00
Yong Cong Sin
726fefd12c arch: riscv: stacktrace: implement arch_stack_walk()
Created the `arch_stack_walk()` function out of the original
`z_riscv_unwind_stack()`; it has been updated to support
unwinding any thread.

Updated the stack_unwind test case accordingly.

Increased the delay in `test_fatal_on_smp` to wait
for the fatal thread to be terminated, as the stacktrace can
take a bit more time.

Doubled the kernel/smp testcase timeout from 60s (default) to
120s, as some of the tests can take a little more than 60s
to finish.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-13 16:46:48 -04:00
Yong Cong Sin
5121de6e73 arch: introduce arch_stack_walk()
An architecture can indicate that it has an implementation for
the `arch_stack_walk()` function by selecting
`ARCH_HAS_STACKWALK`.

Set the default value of `EXCEPTION_STACK_TRACE_MAX_FRAMES` to
`ARCH_STACKWALK_MAX_FRAMES` if the latter is available.

Signed-off-by: Yong Cong Sin <ycsin@meta.com>
2024-06-13 16:46:48 -04:00
Nicolas Pitre
6cc9d7ec20 tests: arm64: exercise MMU page table allocation and recycling
This code was never formally tested before... and without the preceding
commit it obviously didn't work either.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 05:47:24 -04:00
Nicolas Pitre
f7e11649fd arch/arm64/mmu: fix page table reference counting
Existing code conflated table usage counts and table reference counts.
This obviously doesn't work: a table with one reference to it and one
populated PTE is not the same as a table with 2 references to it and
no PTE in use.

So split the two concepts and adjust the code accordingly. A page needs
to have its PTE usage count drop to zero before the last reference is
released. When both counts are 0, the page is free.
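
The distinction, as a tiny hedged model rather than the actual arm64 code:
one counter tracks populated PTEs inside the table, the other tracks how
many parents/domains reference the table, and the page is only recyclable
when both reach zero.

    /* Sketch only -- simplified model of the two counters. */
    #include <stdbool.h>
    #include <stdint.h>

    struct page_table {
        uint16_t entries_in_use;   /* populated PTEs inside this table */
        uint16_t references;       /* parent tables / domains pointing at it */
    };

    static bool table_is_free(const struct page_table *t)
    {
        /* A table with 1 reference and 1 live PTE is not the same thing as
         * a table with 2 references and no live PTE: both counters must
         * reach zero before the page can go back to the pool. */
        return (t->entries_in_use == 0U) && (t->references == 0U);
    }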

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2024-06-13 05:47:24 -04:00
Daniel Leung
564ca11631 kernel: mm: rename z_page_fault() to k_mem_page_fault()
This is part of a series of move memory management related
stuff out of Z_ namespace into its own namespace.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2024-06-12 21:13:26 -04:00