Compare commits


No commits in common. "f9e3b65d3a9794ee2233aa88172346f887b48d04" and "19f645edd40b38e54f505135beced1919fdc7715" have entirely different histories.

5955 changed files with 50738 additions and 168462 deletions


@@ -6,8 +6,8 @@ labels: bug
assignees: ''
---
<!--
**Notes**
**Notes (delete this)**
Github Discussions (https://github.com/zephyrproject-rtos/zephyr/discussions)
are available to first verify that the issue is a genuine Zephyr bug and not a
consequence of Zephyr services misuse.
@@ -16,10 +16,8 @@ This issue list is only for bugs in the main Zephyr code base
(https://github.com/zephyrproject-rtos/zephyr/). If the bug is for a project
fork (such as NCS) specific feature, please open an issue in the fork project
instead.
-->
**Describe the bug**
<!--
A clear and concise description of what the bug is.
Please also mention any information which could help others to understand
@@ -29,43 +27,31 @@ the problem you're facing:
- Is this a regression? If yes, have you been able to "git bisect" it to a
specific commit?
- ...
-->
**To Reproduce**
<!--
Steps to reproduce the behavior:
1. mkdir build; cd build
2. cmake -DBOARD=board_xyz
3. make
4. See error
-->
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Impact**
<!--
What impact does this issue have on your progress (e.g., annoyance, showstopper)
-->
**Logs and console output**
<!--
If applicable, add console logs or other types of debug information
e.g. Wireshark capture or Logic analyzer capture (upload in zip archive).
Copy-and-paste text and put a code fence (```) before and after, to help
explain the issue. (If unable to obtain a text log, add a screenshot.)
-->
**Environment (please complete the following information):**
- OS: (e.g. Linux, MacOS, Windows)
- Toolchain (e.g. Zephyr SDK, ...)
- Commit SHA or Version used
**Additional context**
<!--
Add any other context that could be relevant to your issue, such as pin setting,
target configuration, ...
-->


@@ -8,21 +8,13 @@ assignees: ''
---
**Is your enhancement proposal related to a problem? Please describe.**
<!--
A clear and concise description of what the problem is.
-->
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or graphics (drag-and-drop an image) about the feature request here.
-->


@@ -9,52 +9,43 @@ assignees: ''
## Introduction
<!--
This section targets end users, TSC members, maintainers and anyone else that might
need a quick explanation of your proposed change.
-->
### Problem description
<!--
Why do we want this change and what problem are we trying to address?
-->
### Proposed change
<!--
A brief summary of the proposed change - the 10,000 ft view on what it will
change once this change is implemented.
-->
## Detailed RFC
<!--
In this section of the document the target audience is the dev team. Upon
reading this section each engineer should have a rather clear picture of what
needs to be done in order to implement the described feature.
-->
### Proposed change (Detailed)
<!--
This section is freeform - you should describe your change in as much detail
as possible. Please also ensure to include any context or background info here.
For example, do we have existing components which can be reused or altered.
By reading this section, each team member should be able to know what exactly
you're planning to change and how.
-->
### Dependencies
<!--
Highlight how the change may affect the rest of the project (new components,
modifications in other areas), or other teams/projects.
-->
### Concerns and Unresolved Questions
<!--
List any concerns, unknowns, and generally unresolved questions etc.
-->
## Alternatives
<!--
List any alternatives considered, and the reasons for choosing this option
over them.
-->


@@ -8,21 +8,13 @@ assignees: ''
---
**Is your feature request related to a problem? Please describe.**
<!--
A clear and concise description of what the problem is.
-->
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or graphics (drag-and-drop an image) about the feature request here.
-->


@@ -34,13 +34,18 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.13.20240601
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.11.20240324
options: '--entrypoint /bin/bash'
env:
ZEPHYR_TOOLCHAIN_VARIANT: zephyr
BSIM_OUT_PATH: /opt/bsim/
BSIM_COMPONENTS_PATH: /opt/bsim/components
EDTT_PATH: ../tools/edtt
bsim_bt_52_test_results_file: ./bsim_bt/52_bsim_results.xml
bsim_bt_53_test_results_file: ./bsim_bt/53_bsim_results.xml
bsim_bt_53split_test_results_file: ./bsim_bt/53_bsim_split_results.xml
bsim_net_52_test_results_file: ./bsim_net/52_bsim_results.xml
bsim_uart_test_results_file: ./bsim_uart/uart_bsim_results.xml
steps:
- name: Apply container owner mismatch workaround
run: |
@@ -148,39 +153,59 @@ jobs:
- name: Run Bluetooth Tests with BSIM
if: steps.check-bluetooth-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.bt.sh
export ZEPHYR_BASE=${PWD}
# Build and run the BT tests for nrf52_bsim:
nice tests/bsim/bluetooth/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bt_52_test_results_file} \
TESTS_FILE=tests/bsim/bluetooth/tests.nrf52bsim.txt tests/bsim/run_parallel.sh
# Build and run the BT controller tests also for the nrf5340bsim/nrf5340/cpunet
nice tests/bsim/bluetooth/compile.nrf5340bsim_nrf5340_cpunet.sh
BOARD=nrf5340bsim/nrf5340/cpunet \
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bt_53_test_results_file} \
TESTS_FILE=tests/bsim/bluetooth/tests.nrf5340bsim_nrf5340_cpunet.txt \
tests/bsim/run_parallel.sh
# Build and run the nrf5340 split stack tests set
nice tests/bsim/bluetooth/compile.nrf5340bsim_nrf5340_cpuapp.sh
BOARD=nrf5340bsim/nrf5340/cpuapp \
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_bt_53split_test_results_file} \
TESTS_FILE=tests/bsim/bluetooth/tests.nrf5340bsim_nrf5340_cpuapp.txt \
tests/bsim/run_parallel.sh
- name: Run Networking Tests with BSIM
if: steps.check-networking-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.net.sh
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_net nice tests/bsim/net/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_net_52_test_results_file} \
SEARCH_PATH=tests/bsim/net/ tests/bsim/run_parallel.sh
- name: Run UART Tests with BSIM
if: steps.check-uart-files.outputs.any_changed == 'true' || steps.check-common-files.outputs.any_changed == 'true'
run: |
tests/bsim/ci.uart.sh
echo "UART: Single device tests"
./scripts/twister -T tests/drivers/uart/ --force-color --inline-logs -v -M -p nrf52_bsim \
--fixture gpio_loopback -- -uart0_loopback
echo "UART: Multi device tests"
export ZEPHYR_BASE=${PWD}
WORK_DIR=${ZEPHYR_BASE}/bsim_uart nice tests/bsim/drivers/uart/compile.sh
RESULTS_FILE=${ZEPHYR_BASE}/${bsim_uart_test_results_file} \
SEARCH_PATH=tests/bsim/drivers/uart/ tests/bsim/run_parallel.sh
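The run steps above lean on a POSIX shell idiom: assignments such as `RESULTS_FILE=...` placed in front of a command are exported only into that one command's environment. A minimal sketch (the variable name is illustrative):

```shell
# An assignment prefixed to a command is visible only to that command.
RESULTS_FILE=/tmp/results.xml sh -c 'echo "writing to ${RESULTS_FILE}"'

# The calling shell itself never sees the assignment.
echo "outside: ${RESULTS_FILE:-unset}"
```

This is why each `run_parallel.sh` invocation can point at a different results file without the setting leaking between steps.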
- name: Merge Test Results
run: |
pip3 install junitparser junit2html
junitparser merge --glob "./bsim_*/*bsim_results.*.xml" "./twister-out/twister.xml" junit.xml
junit2html junit.xml junit.html
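The merge step quotes its `--glob` argument so that junitparser, not the shell, expands the pattern. A minimal sketch of the difference, using a throwaway directory (the file names are illustrative):

```shell
# Build a small tree resembling the results layout above.
demo=$(mktemp -d)
mkdir -p "${demo}/bsim_bt"
touch "${demo}/bsim_bt/52_bsim_results.bt.xml"
cd "${demo}"

# Unquoted: the shell expands the pattern before the program runs.
printf '%s\n' ./bsim_*/*.xml

# Quoted: the program receives the literal pattern and may expand it
# itself, which is what a --glob style flag expects.
printf '%s\n' "./bsim_*/*.xml"
```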
- name: Upload Unit Test Results in HTML
- name: Upload Test Results
if: always()
uses: actions/upload-artifact@v4
with:
name: HTML Unit Test Results
if-no-files-found: ignore
name: bsim-test-results
path: |
junit.html
- name: Publish Unit Test Results
uses: EnricoMi/publish-unit-test-result-action@v2
with:
check_name: Bsim Test Results
files: "junit.xml"
comment_mode: off
./bsim_bt/52_bsim_results.xml
./bsim_bt/53_bsim_results.xml
./bsim_bt/53_bsim_split_results.xml
./bsim_net/52_bsim_results.xml
./bsim_uart/uart_bsim_results.xml
./twister-out/twister.xml
./twister-out/twister.json
${{ github.event_path }}
if-no-files-found: warn
- name: Upload Event Details
if: always()


@@ -12,7 +12,7 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.13.20240601
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.11.20240324
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false


@@ -14,7 +14,7 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.13.20240601
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.11.20240324
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false


@@ -1,12 +1,6 @@
name: Compliance Checks
on:
pull_request:
types:
- edited
- opened
- reopened
- synchronize
on: pull_request
jobs:
check_compliance:
@@ -58,14 +52,6 @@ jobs:
west config manifest.group-filter -- +ci,-optional
west update -o=--depth=1 -n 2>&1 1> west.update.log || west update -o=--depth=1 -n 2>&1 1> west.update2.log
- name: Check for PR description
if: ${{ github.event.pull_request.body == '' }}
continue-on-error: true
id: pr_description
run: |
echo "Pull request description cannot be empty."
exit 1
- name: Run Compliance Tests
continue-on-error: true
id: compliance
@@ -108,12 +94,5 @@ jobs:
done
if [ "${exit}" == "1" ]; then
echo "Compliance error, check for error messages in the \"Run Compliance Tests\" step"
echo "You can run this step locally with the ./scripts/ci/check_compliance.py script."
exit 1;
fi
if [ "${{ steps.pr_description.outcome }}" == "failure" ]; then
echo "PR description cannot be empty"
exit 1;
fi
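The `west update ... 2>&1 1> west.update.log` line earlier in this job relies on redirections being applied left to right: `2>&1` first points stderr at the original stdout, then `1>` moves stdout alone into the log, so errors stay visible while normal output is captured. A minimal sketch of that ordering:

```shell
log=$(mktemp)

# stderr is duplicated onto the current stdout *before* stdout is
# redirected into the log, so only regular output lands in the file.
sh -c 'echo captured; echo visible 1>&2' 2>&1 1>"${log}"

cat "${log}"   # the log holds only the line written to stdout
```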


@@ -27,9 +27,9 @@ jobs:
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12']
os: [ubuntu-22.04, macos-14, windows-2022]
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-14
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6


@@ -18,9 +18,6 @@ env:
# so we fetch that through pip.
CMAKE_VERSION: 3.20.5
DOXYGEN_VERSION: 1.9.6
# Job count is set to 2 less than the vCPU count of 16 because the total available RAM is 32GiB
# and each sphinx-build process may use more than 2GiB of RAM.
JOB_COUNT: 14
jobs:
doc-file-check:
@@ -53,8 +50,6 @@ jobs:
scripts/dts/
doc/requirements.txt
.github/workflows/doc-build.yml
scripts/pylib/pytest-twister-harness/src/twister_harness/device/device_adapter.py
scripts/pylib/pytest-twister-harness/src/twister_harness/helpers/shell.py
doc-build-html:
name: "Documentation Build (HTML)"
@@ -135,11 +130,7 @@ jobs:
else
DOC_TARGET="html"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-j ${JOB_COUNT} -W --keep-going -T" \
SPHINXOPTS_EXTRA="-q -t publish" \
make -C doc ${DOC_TARGET}
DOC_TAG=${DOC_TAG} SPHINXOPTS_EXTRA="-q -t publish" make -C doc ${DOC_TARGET}
# API documentation coverage
python3 -m coverxygen --xml-dir doc/_build/html/doxygen/xml/ --src-dir include/ --output doc-coverage.info
@@ -217,7 +208,7 @@ jobs:
- name: install-pkgs
run: |
apt-get update
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin imagemagick
apt-get install -y python3-pip python3-venv ninja-build doxygen graphviz librsvg2-bin
- name: cache-pip
uses: actions/cache@v4
@@ -252,10 +243,7 @@ jobs:
DOC_TAG="development"
fi
DOC_TAG=${DOC_TAG} \
SPHINXOPTS="-q -j ${JOB_COUNT}" \
LATEXMKOPTS="-quiet -halt-on-error" \
make -C doc pdf
DOC_TAG=${DOC_TAG} SPHINXOPTS="-q -j auto" LATEXMKOPTS="-quiet -halt-on-error" make -C doc pdf
- name: upload-build
if: always()


@@ -10,7 +10,7 @@ jobs:
check-errno:
runs-on: ubuntu-22.04
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.13
image: ghcr.io/zephyrproject-rtos/ci:v0.26.11
steps:
- name: Apply container owner mismatch workaround


@@ -26,7 +26,7 @@ jobs:
group: zephyr-runner-v2-linux-x64-4xlarge
if: github.repository_owner == 'zephyrproject-rtos'
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.13.20240601
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.11.20240324
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false


@@ -26,7 +26,7 @@ jobs:
strategy:
fail-fast: false
matrix:
os: [ubuntu-22.04, macos-13, macos-14, windows-2022]
os: [ubuntu-22.04, macos-12, macos-14, windows-2022]
runs-on: ${{ matrix.os }}
steps:
- name: Checkout


@@ -25,7 +25,7 @@ jobs:
runs-on:
group: zephyr-runner-v2-linux-x64-4xlarge
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.13.20240601
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.11.20240324
options: '--entrypoint /bin/bash'
outputs:
subset: ${{ steps.output-services.outputs.subset }}
@@ -129,7 +129,7 @@ jobs:
needs: twister-build-prep
if: needs.twister-build-prep.outputs.size != 0
container:
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.13.20240601
image: ghcr.io/zephyrproject-rtos/ci-repo-cache:v0.26.11.20240324
options: '--entrypoint /bin/bash'
strategy:
fail-fast: false


@@ -24,7 +24,7 @@ jobs:
python-version: ['3.10', '3.11', '3.12']
os: [ubuntu-22.04]
container:
image: ghcr.io/zephyrproject-rtos/ci:v0.26.13
image: ghcr.io/zephyrproject-rtos/ci:v0.26.11
steps:
- name: Apply Container Owner Mismatch Workaround


@@ -30,9 +30,9 @@ jobs:
strategy:
matrix:
python-version: ['3.10', '3.11', '3.12']
os: [ubuntu-22.04, macos-14, windows-2022]
os: [ubuntu-22.04, macos-11, windows-2022]
exclude:
- os: macos-14
- os: macos-11
python-version: 3.6
- os: windows-2022
python-version: 3.6

.gitignore

@@ -7,10 +7,8 @@
*.swp
*.swo
*~
# Emacs
.#*
#*#
build*/
!doc/build/
!scripts/build
@@ -29,8 +27,6 @@ outdir
outdir-*
scripts/basic/fixdep
scripts/gen_idt/gen_idt
coverage-report
doc-coverage.info
doc/_build
doc/doxygen
doc/xml
@@ -57,7 +53,6 @@ venv
.venv
.DS_Store
.clangd
new.info
# CI output
compliance.xml


@@ -111,6 +111,11 @@ zephyr_library_named(zephyr)
if(CONFIG_LEGACY_GENERATED_INCLUDE_PATH)
zephyr_include_directories(${PROJECT_BINARY_DIR}/include/generated/zephyr)
message(WARNING "
Warning: CONFIG_LEGACY_GENERATED_INCLUDE_PATH is currently enabled by default
so that user applications can continue to use the legacy include paths for the
generated headers. This Kconfig will be deprecated and eventually removed in
future releases.")
endif()
zephyr_include_directories(
@@ -192,7 +197,6 @@ get_property(OPTIMIZE_FOR_NO_OPTIMIZATIONS_FLAG TARGET compiler PROPERTY no_opti
get_property(OPTIMIZE_FOR_DEBUG_FLAG TARGET compiler PROPERTY optimization_debug)
get_property(OPTIMIZE_FOR_SPEED_FLAG TARGET compiler PROPERTY optimization_speed)
get_property(OPTIMIZE_FOR_SIZE_FLAG TARGET compiler PROPERTY optimization_size)
get_property(OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG TARGET compiler PROPERTY optimization_size_aggressive)
# From kconfig choice, pick the actual OPTIMIZATION_FLAG to use.
# Kconfig choice ensures only one of these CONFIG_*_OPTIMIZATIONS is set.
@@ -204,8 +208,6 @@ elseif(CONFIG_SPEED_OPTIMIZATIONS)
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_SPEED_FLAG})
elseif(CONFIG_SIZE_OPTIMIZATIONS)
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_SIZE_FLAG}) # Default in kconfig
elseif(CONFIG_SIZE_OPTIMIZATIONS_AGGRESSIVE)
set(OPTIMIZATION_FLAG ${OPTIMIZE_FOR_SIZE_AGGRESSIVE_FLAG})
else()
message(FATAL_ERROR
"Unreachable code. Expected optimization level to have been chosen. See Kconfig.zephyr")
@@ -1710,8 +1712,9 @@ if(CONFIG_BUILD_OUTPUT_BIN AND CONFIG_BUILD_OUTPUT_UF2)
set(BYPRODUCT_KERNEL_UF2_NAME "${PROJECT_BINARY_DIR}/${KERNEL_UF2_NAME}" CACHE FILEPATH "Kernel uf2 file" FORCE)
endif()
set(KERNEL_META_PATH ${PROJECT_BINARY_DIR}/${KERNEL_META_NAME} CACHE INTERNAL "")
if(CONFIG_BUILD_OUTPUT_META)
set(KERNEL_META_PATH ${PROJECT_BINARY_DIR}/${KERNEL_META_NAME} CACHE INTERNAL "")
list(APPEND
post_build_commands
COMMAND ${PYTHON_EXECUTABLE} ${ZEPHYR_BASE}/scripts/zephyr_module.py
@@ -1725,9 +1728,6 @@ if(CONFIG_BUILD_OUTPUT_META)
post_build_byproducts
${KERNEL_META_PATH}
)
else(CONFIG_BUILD_OUTPUT_META)
# Prevent spdx from using invalid data
file(REMOVE ${KERNEL_META_PATH})
endif()
# Cleanup intermediate files
@@ -1888,20 +1888,6 @@ if(CONFIG_BUILD_OUTPUT_INFO_HEADER)
)
endif()
if (CONFIG_LLEXT AND CONFIG_LLEXT_EXPORT_BUILTINS_BY_SLID)
# slidgen must be the first post-build command to be executed
# on the Zephyr ELF to ensure that all other commands, such as
# binary file generation, are operating on a prepared ELF.
list(PREPEND
post_build_commands
COMMAND ${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/build/llext_prepare_exptab.py
--elf-file ${PROJECT_BINARY_DIR}/${KERNEL_ELF_NAME}
--slid-listing ${PROJECT_BINARY_DIR}/slid_listing.txt
)
endif()
if(NOT CMAKE_C_COMPILER_ID STREQUAL "ARMClang")
set(check_init_priorities_input
$<IF:$<TARGET_EXISTS:native_runner_executable>,${BYPRODUCT_KERNEL_EXE_NAME},${BYPRODUCT_KERNEL_ELF_NAME}>
@@ -1988,39 +1974,22 @@ elseif(CONFIG_LOG_MIPI_SYST_USE_CATALOG)
endif()
if(LOG_DICT_DB_NAME_ARG)
set(log_dict_gen_command
if (NOT CONFIG_LOG_DICTIONARY_DB_TARGET)
set(LOG_DICT_DB_ALL_TARGET ALL)
endif()
add_custom_command(
OUTPUT ${LOG_DICT_DB_NAME}
COMMAND
${PYTHON_EXECUTABLE}
${ZEPHYR_BASE}/scripts/logging/dictionary/database_gen.py
${KERNEL_ELF_NAME}
${LOG_DICT_DB_NAME_ARG}=${LOG_DICT_DB_NAME}
--build-header ${PROJECT_BINARY_DIR}/include/generated/zephyr/version.h
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
COMMENT "Generating logging dictionary database: ${LOG_DICT_DB_NAME}"
DEPENDS ${logical_target_for_zephyr_elf}
)
if (NOT CONFIG_LOG_DICTIONARY_DB_TARGET)
# If not using a separate target for generating logging dictionary
# database, add the generation to post build command to make sure
# the database is actually being generated.
list(APPEND
post_build_commands
COMMAND ${CMAKE_COMMAND} -E echo "Generating logging dictionary database: ${LOG_DICT_DB_NAME}"
COMMAND ${log_dict_gen_command}
)
list(APPEND
post_build_byproducts
${LOG_DICT_DB_NAME}
)
else()
# Separate build target for generating logging dictionary database.
# This needs to be explicitly called/used to generate the database.
add_custom_command(
OUTPUT ${LOG_DICT_DB_NAME}
COMMAND ${log_dict_gen_command}
WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
COMMENT "Generating logging dictionary database: ${LOG_DICT_DB_NAME}"
DEPENDS ${logical_target_for_zephyr_elf}
)
add_custom_target(log_dict_db_gen DEPENDS ${LOG_DICT_DB_NAME})
endif()
add_custom_target(log_dict_db_gen ${LOG_DICT_DB_ALL_TARGET} DEPENDS ${LOG_DICT_DB_NAME})
endif()
# Add post_build_commands to post-process the final .elf file produced by
@@ -2160,8 +2129,9 @@ add_custom_command(
COMMAND ${CMAKE_COMMAND}
-DPROJECT_BINARY_DIR=${PROJECT_BINARY_DIR}
-DAPPLICATION_SOURCE_DIR=${APPLICATION_SOURCE_DIR}
-DINTERFACE_INCLUDE_DIRECTORIES="$<TARGET_PROPERTY:zephyr_interface,INTERFACE_INCLUDE_DIRECTORIES>"
-DINTERFACE_INCLUDE_DIRECTORIES="$<JOIN:$<TARGET_PROPERTY:zephyr_interface,INTERFACE_INCLUDE_DIRECTORIES>,:>"
-Dllext_edk_file=${llext_edk_file}
-DAUTOCONF_H=${AUTOCONF_H}
-Dllext_cflags="${llext_edk_cflags}"
-Dllext_edk_name=${CONFIG_LLEXT_EDK_NAME}
-DWEST_TOPDIR=${WEST_TOPDIR}
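The `-DINTERFACE_INCLUDE_DIRECTORIES` change above wraps the property in a `$<JOIN:...,:>` generator expression so the directory list reaches the script as a single colon-separated value rather than a semicolon-separated CMake list. The joining it performs can be sketched in shell (the directory names are illustrative):

```shell
# CMake lists are ';'-separated; $<JOIN:list,:> re-joins them with ':'.
cmake_list="include;build/include;subsys/include"
joined=$(printf '%s' "${cmake_list}" | tr ';' ':')
echo "${joined}"   # include:build/include:subsys/include
```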


@@ -26,6 +26,8 @@
/soc/arm/infineon_xmc/ @parthitce
/soc/arm/silabs_exx32/efm32pg1b/ @rdmeneze
/soc/arm/silabs_exx32/efr32mg21/ @l-alfred
/soc/arm/st_stm32/ @erwango
/soc/arm/st_stm32/*/power.c @FRASTM
/soc/arm/st_stm32/stm32mp1/ @arnopo
/soc/arm/st_stm32/stm32h7/*stm32h735* @benediktibk
/soc/arm/st_stm32/stm32l4/*stm32l451* @benediktibk
@@ -53,6 +55,8 @@
/boards/arm/acn52832/ @sven-hm
/boards/arm/arduino_mkrzero/ @soburi
/boards/arm/bbc_microbit_v2/ @LingaoM
/boards/arm/bl5340_dvk/ @lairdjm
/boards/arm/bl65*/ @lairdjm
/boards/arm/blackpill_f401ce/ @coderkalyan
/boards/arm/blackpill_f411ce/ @coderkalyan
/boards/arm/bt*10/ @greg-leach
@@ -63,6 +67,7 @@
/boards/arm/cy8ckit_062s4/ @DaWei8823
/boards/arm/cy8ckit_062_wifi_bt/ @ifyall @npal-cy
/boards/arm/cy8cproto_062_4343w/ @ifyall @npal-cy
/boards/arm/disco_l475_iot1/ @erwango
/boards/arm/efm32pg_stk3401a/ @rdmeneze
/boards/arm/faze/ @mbittan @simonguinot
/boards/arm/frdm*/ @mmahadevan108 @dleach02
@@ -72,6 +77,7 @@
/boards/arm/ip_k66f/ @parthitce @lmajewski
/boards/arm/legend/ @mbittan @simonguinot
/boards/arm/lpcxpresso*/ @mmahadevan108 @dleach02
/boards/arm/mg100/ @rerickson1
/boards/arm/mimx8mm_evk/ @Mani-Sadhasivam
/boards/arm/mimx8mm_phyboard_polis @pefech
/boards/arm/mimxrt*/ @mmahadevan108 @dleach02
@@ -79,8 +85,10 @@
/boards/arm/msp_exp432p401r_launchxl/ @Mani-Sadhasivam
/boards/arm/npcx7m6fb_evb/ @MulinChao @ChiHuaL
/boards/arm/nrf*/ @carlescufi @lemrey
/boards/arm/nucleo*/ @erwango @ABOSTM @FRASTM
/boards/arm/nucleo_f401re/ @idlethread
/boards/arm/nuvoton_pfm_m487/ @ssekar15
/boards/arm/pinnacle_100_dvk/ @rerickson1
/boards/arm/qemu_cortex_a9/ @ibirnbaum
/boards/arm/qemu_cortex_r*/ @stephanosio
/boards/arm/qemu_cortex_m*/ @ioannisg @stephanosio
@@ -98,13 +106,14 @@
/boards/arm/sensortile_box/ @avisconti
/boards/arm/steval_fcu001v1/ @Navin-Sankar
/boards/arm/stm32l1_disco/ @karlp
/boards/arm/stm32*_disco/ @erwango @ABOSTM @FRASTM
/boards/arm/stm32h735g_disco/ @benediktibk
/boards/arm/stm32f3_disco/ @ydamigos
/boards/arm/stm32*_eval/ @erwango @ABOSTM @FRASTM
/boards/arm/rcar_*/ @aaillet
/boards/arm/ubx_bmd345eval_nrf52840/ @Navin-Sankar @brec-u-blox
/boards/arm/nrf5340_audio_dk_nrf5340 @koffes @alexsven @erikrobstad @rick1082 @gWacey
/boards/arm/stm32_min_dev/ @sidcha
/boards/ezurio/* @rerickson1
/boards/riscv/rv32m1_vega/ @dleach02
/boards/riscv/adp_xc7k_ae350/ @cwshu @kevinwang821020 @jimmyzhe
/boards/riscv/longan_nano/ @soburi
@@ -143,6 +152,7 @@
/drivers/*/*cc13xx_cc26xx* @bwitherspoon
/drivers/*/*gd32* @nandojve
/drivers/*/*mcux* @mmahadevan108 @dleach02
/drivers/*/*stm32* @erwango @ABOSTM @FRASTM
/drivers/*/*native_posix* @aescolar @daor-oti
/drivers/*/*lpc11u6x* @mbittan @simonguinot
/drivers/*/*npcx* @MulinChao @ChiHuaL
@@ -221,6 +231,7 @@
/drivers/gpio/*b91* @andy-liu-telink
/drivers/gpio/*lmp90xxx* @henrikbrixandersen
/drivers/gpio/*nct38xx* @MulinChao @ChiHuaL
/drivers/gpio/*stm32* @erwango
/drivers/gpio/*eos_s3* @fkokosinski @kgugala
/drivers/gpio/*rcar* @aaillet
/drivers/gpio/*esp32* @sylvioalves
@@ -346,6 +357,7 @@
/drivers/serial/uart_ite_it8xxx2.c @GTLin08
/drivers/serial/*intel_lw* @shilinte
/drivers/serial/*kb1200* @ene-steven
/drivers/disk/sdmmc_sdhc.h @JunYangNXP
/drivers/disk/sdmmc_stm32.c @anthonybrandon
/drivers/ptp_clock/ @tbursztyka @jukkar
/drivers/spi/*b91* @andy-liu-telink
@@ -362,6 +374,7 @@
/drivers/timer/*xlnx_psttc* @wjliang @stephanosio
/drivers/timer/*cc13xx_cc26xx_rtc* @vanti
/drivers/timer/*cavs* @dcpleung
/drivers/timer/*stm32_lptim* @FRASTM
/drivers/timer/*leon_gptimer* @julius-barendt
/drivers/timer/*mips_cp0* @frantony
/drivers/timer/*rcar_cmt* @aaillet
@@ -411,6 +424,7 @@
/dts/arm64/renesas/ @lorc @xakep-amatop
/dts/arm/quicklogic/ @fkokosinski @kgugala
/dts/arm/seeed_studio/ @str4t0m
/dts/arm/st/ @erwango
/dts/arm/st/h7/*stm32h735* @benediktibk
/dts/arm/st/l4/*stm32l451* @benediktibk
/dts/arm/ti/cc13?2* @bwitherspoon
@@ -464,6 +478,7 @@
/dts/bindings/*/nxp*s32* @manuargue
/dts/bindings/*/openisa* @dleach02
/dts/bindings/*/raspberrypi*pico* @yonsch
/dts/bindings/*/st* @erwango
/dts/bindings/sensor/ams* @alexanderwachter
/dts/bindings/*/sifive* @mateusz-holenko @kgugala @pgielda
/dts/bindings/*/andes* @cwshu @kevinwang821020 @jimmyzhe


@@ -478,7 +478,6 @@ choice COMPILER_OPTIMIZATIONS
prompt "Optimization level"
default NO_OPTIMIZATIONS if COVERAGE
default DEBUG_OPTIMIZATIONS if DEBUG
default SIZE_OPTIMIZATIONS_AGGRESSIVE if "$(ZEPHYR_TOOLCHAIN_VARIANT)" = "llvm"
default SIZE_OPTIMIZATIONS
help
Note that these flags shall only control the compiler
@@ -491,12 +490,6 @@ config SIZE_OPTIMIZATIONS
Compiler optimizations will be set to -Os independently of other
options.
config SIZE_OPTIMIZATIONS_AGGRESSIVE
bool "Aggressively optimize for size"
help
Compiler optimizations will be set to -Oz independently of other
options.
config SPEED_OPTIMIZATIONS
bool "Optimize for speed"
help


@@ -119,7 +119,6 @@ ACPI:
- lib/acpi/
- include/zephyr/acpi/
- tests/lib/acpi/
- dts/bindings/acpi/
labels:
- "area: ACPI"
tests:
@@ -137,7 +136,6 @@ ARC arch:
- include/zephyr/arch/arc/
- tests/arch/arc/
- dts/arc/synopsys/
- dts/bindings/arc/
- doc/hardware/arch/arc-support-status.rst
labels:
- "area: ARC"
@@ -201,7 +199,6 @@ ARM Platforms:
- soc/arm/designstart/
- soc/arm/fvp_aemv8*/
- dts/arm/armv*.dtsi
- dts/bindings/arm/arm*.yaml
labels:
- "platform: ARM"
@@ -240,11 +237,10 @@ MIPS arch:
- arch.mips
Ambiq Platforms:
status: maintained
maintainers:
- AlessandroLuo
status: odd fixes
collaborators:
- aaronyegx
- AlessandroLuo
- RichardSWheatley
files:
- soc/ambiq/
@@ -304,6 +300,7 @@ Bluetooth:
collaborators:
- hermabe
- Vudentz
- Thalley
- asbjornsabo
- sjanc
files:
@@ -398,9 +395,6 @@ Bluetooth Host:
- subsys/bluetooth/shell/
- tests/bluetooth/host*/
- tests/bsim/bluetooth/host/
files-exclude:
- subsys/bluetooth/host/classic/
- include/zephyr/bluetooth/classic/
labels:
- "area: Bluetooth Host"
- "area: Bluetooth"
@@ -443,23 +437,19 @@ Bluetooth Audio:
- kruithofa
- larsgk
- pin-zephyr
- niym-ot
- jthm-ot
files:
- subsys/bluetooth/audio/
- include/zephyr/bluetooth/audio/
- tests/bluetooth/audio/
- tests/bsim/bluetooth/audio/
- tests/bsim/bluetooth/audio_samples/
- tests/bluetooth/shell/audio.conf
- tests/bluetooth/tester/overlay-le-audio.conf
- tests/bluetooth/tester/src/audio/
- doc/connectivity/bluetooth/api/audio/
- samples/bluetooth/bap*/
- samples/bluetooth/cap*/
- samples/bluetooth/broadcast_audio*/
- samples/bluetooth/hap*/
- samples/bluetooth/pbp*/
- samples/bluetooth/public_broadcast*/
- samples/bluetooth/tmap*/
- samples/bluetooth/unicast_audio*/
labels:
- "area: Bluetooth Audio"
- "area: Bluetooth"
@@ -478,7 +468,6 @@ Bluetooth Classic:
- include/zephyr/bluetooth/classic/
labels:
- "area: Bluetooth Classic"
- "area: Bluetooth"
tests:
- bluetooth
@@ -601,22 +590,6 @@ CMSIS API layer:
- portability.cmsis_rtos_v1
- portability.cmsis_rtos_v2
DAP:
status: maintained
maintainers:
- jfischer-no
collaborators:
- maxd-nordic
files:
- include/zephyr/drivers/swdp.h
- drivers/dp/
- subsys/dap/
- samples/subsys/dap/
description: >-
Debug Access Port controller
labels:
- "area: dap"
DSP subsystem:
status: maintained
maintainers:
@@ -741,20 +714,6 @@ Debug:
tests:
- debug
"Debug: Symtab":
status: maintained
maintainers:
- ycsin
files:
- include/zephyr/debug/symtab.h
- subsys/debug/symtab/
- tests/subsys/debug/symtab/
- scripts/build/gen_symtab.py
labels:
- "area: Symtab"
tests:
- debug.symtab
Demand Paging:
status: maintained
maintainers:
@@ -773,8 +732,8 @@ Device Driver Model:
status: maintained
maintainers:
- gmarull
- tbursztyka
collaborators:
- tbursztyka
- dcpleung
- nashif
files:
@@ -805,10 +764,11 @@ DFU:
- dfu
Devicetree:
status: odd fixes
status: maintained
maintainers:
- galak
collaborators:
- decsny
- galak
files:
- scripts/dts/
- dts/common/
@@ -816,7 +776,6 @@ Devicetree:
- doc/build/dts/
- include/zephyr/devicetree/
- scripts/kconfig/kconfigfunctions.py
- doc/build/kconfig/preprocessor-functions.rst
- include/zephyr/devicetree.h
files-exclude:
- dts/common/nordic/
@@ -826,11 +785,13 @@ Devicetree:
- libraries.devicetree
Devicetree Bindings:
status: odd fixes
status: maintained
maintainers:
- galak
collaborators:
- decsny
- galak
files:
- dts/bindings/
- include/zephyr/dt-bindings/
- dts/binding-template.yaml
labels:
@@ -870,14 +831,10 @@ Display drivers:
- include/zephyr/drivers/display.h
- subsys/fb/
- samples/subsys/display/
- tests/subsys/display/
- doc/hardware/peripherals/display/
- tests/drivers/*/display/
labels:
- "area: Display"
tests:
- display.cfb
- drivers.display
Documentation:
status: maintained
@@ -910,7 +867,6 @@ Documentation:
files-exclude:
- doc/releases/migration-guide-*
- doc/releases/release-notes-*
- doc/develop/test/
labels:
- "area: Documentation"
@@ -959,7 +915,6 @@ Release Notes:
- doc/hardware/peripherals/adc.rst
- tests/drivers/build_all/adc/
- include/zephyr/dt-bindings/adc/
- dts/bindings/adc/
labels:
- "area: ADC"
tests:
@@ -1036,7 +991,6 @@ Release Notes:
- samples/modules/canopennode/
- samples/net/sockets/can/
- samples/subsys/canbus/
- scripts/west_commands/runners/canopen_program.py
- subsys/canbus/
- subsys/net/l2/canbus/
- tests/drivers/build_all/can/
@@ -1054,7 +1008,7 @@ Release Notes:
maintainers:
- rriveramcrus
collaborators:
- RobertZ2011
- GRobertZieba
files:
- drivers/charger/
- dts/bindings/charger/
@@ -1234,7 +1188,7 @@ Release Notes:
- samples/drivers/eeprom/
- tests/drivers/eeprom/
- tests/drivers/*/eeprom/
- doc/hardware/peripherals/eeprom/
- doc/hardware/peripherals/eeprom.rst
labels:
- "area: EEPROM"
tests:
@@ -1281,7 +1235,6 @@ Release Notes:
collaborators:
- decsny
- lmajewski
- pdgendt
files:
- drivers/ethernet/
- include/zephyr/dt-bindings/ethernet/
@@ -1369,7 +1322,7 @@ Release Notes:
"Drivers: GNSS":
status: maintained
maintainers:
- bjarki-andreasen
- bjarki-trackunit
collaborators:
- tomi-font
- fabiobaltieri
@@ -1518,7 +1471,6 @@ Release Notes:
- drivers/mdio/
- include/zephyr/drivers/mdio.h
- tests/drivers/build_all/mdio/
- dts/bindings/mdio/
labels:
- "area: MDIO"
tests:
@@ -1539,26 +1491,6 @@ Release Notes:
tests:
- drivers.mipi_dsi
"Drivers: MSPI":
status: maintained
maintainers:
- swift-tk
files:
- drivers/mspi/
- drivers/memc/*mspi*
- drivers/flash/*mspi*
- include/zephyr/drivers/mspi.h
- include/zephyr/drivers/mspi/
- samples/drivers/mspi/
- tests/drivers/mspi/
- doc/hardware/peripherals/mspi.rst
- dts/bindings/mspi/
- dts/bindings/mtd/mspi*
labels:
- "area: MSPI"
tests:
- drivers.mspi
"Drivers: Reset":
status: odd fixes
collaborators:
@@ -1566,7 +1498,6 @@ Release Notes:
files:
- drivers/reset/
- include/zephyr/drivers/reset.h
- dts/bindings/reset/
"Interrupt Handling":
status: odd fixes
@@ -1636,7 +1567,6 @@ Release Notes:
- tests/drivers/led/
- doc/hardware/peripherals/led.rst
- tests/drivers/build_all/led/
- dts/bindings/led/
labels:
- "area: LED"
tests:
@@ -1645,9 +1575,9 @@ Release Notes:
"Drivers: LED Strip":
status: maintained
maintainers:
- mbolivar-ampere
- simonguinot
collaborators:
- mbolivar-ampere
- soburi
- thedjnK
files:
@@ -1705,7 +1635,6 @@ Release Notes:
- tests/drivers/regulator/
- tests/drivers/build_all/regulator/
- doc/hardware/peripherals/regulators.rst
- dts/bindings/regulator/
labels:
- "area: Regulators"
tests:
@@ -1721,7 +1650,6 @@ Release Notes:
- include/zephyr/drivers/retained_mem.h
- tests/drivers/retained_mem/
- doc/hardware/peripherals/retained_mem.rst
- dts/bindings/retained_mem/
labels:
- "area: Retained Memory"
tests:
@@ -1730,7 +1658,7 @@ Release Notes:
"Drivers: RTC":
status: maintained
maintainers:
- bjarki-andreasen
- bjarki-trackunit
files:
- drivers/rtc/
- include/zephyr/drivers/rtc/
@@ -1738,7 +1666,6 @@ Release Notes:
- doc/hardware/peripherals/rtc.rst
- include/zephyr/drivers/rtc.h
- tests/drivers/build_all/rtc/
- dts/bindings/rtc/
labels:
- "area: RTC"
tests:
@@ -1752,7 +1679,6 @@ Release Notes:
- drivers/pcie/
- include/zephyr/drivers/pcie/
- doc/hardware/peripherals/pcie.rst
- dts/bindings/pcie/
labels:
- "area: PCI"
@@ -1767,7 +1693,6 @@ Release Notes:
- include/zephyr/drivers/peci.h
- samples/drivers/peci/
- doc/hardware/peripherals/peci.rst
- dts/bindings/peci/
labels:
- "area: PECI"
tests:
@@ -1825,7 +1750,6 @@ Release Notes:
- include/zephyr/drivers/pm_cpu_ops/
- include/zephyr/drivers/pm_cpu_ops.h
- include/zephyr/arch/arm64/arm-smccc.h
- dts/bindings/pm_cpu_ops/
labels:
- "area: PM CPU ops"
@@ -1904,7 +1828,7 @@ Release Notes:
- dts/bindings/sensor/
- include/zephyr/drivers/sensor/
- include/zephyr/dt-bindings/sensor/
- doc/hardware/peripherals/sensor/
- doc/hardware/peripherals/sensor.rst
- tests/drivers/build_all/sensor/
labels:
- "area: Sensors"
@@ -1937,7 +1861,6 @@ Release Notes:
- drivers/spi/
- include/zephyr/drivers/spi.h
- tests/drivers/spi/
- dts/bindings/spi/
- doc/hardware/peripherals/spi.rst
labels:
- "area: SPI"
@@ -1953,7 +1876,6 @@ Release Notes:
files:
- drivers/timer/
- include/zephyr/drivers/timer/
- dts/bindings/timer/
labels:
- "area: Timer"
@ -1967,7 +1889,6 @@ Release Notes:
- include/zephyr/drivers/video-controls.h
- doc/hardware/peripherals/video.rst
- tests/drivers/*/video/
- dts/bindings/video/
labels:
- "area: Video"
tests:
@ -2020,7 +1941,6 @@ Release Notes:
- krish2718
files:
- drivers/wifi/
- dts/bindings/wifi/
labels:
- "area: Wi-Fi"
@ -2106,7 +2026,6 @@ Xen Platform:
- arch/arm64/core/xen/
- soc/xen/
- boards/xen/
- dts/bindings/xen/
labels:
- "area: Xen Platform"
@ -2122,7 +2041,6 @@ Filesystems:
- samples/subsys/fs/
- subsys/fs/
- tests/subsys/fs/
- dts/bindings/fs/
labels:
- "area: File System"
tests:
@ -2221,7 +2139,6 @@ IPC:
- subsys/ipc/
- tests/subsys/ipc/
- doc/services/ipc/
- dts/bindings/ipc/
description: >-
Inter-Processor Communication
labels:
@ -2393,16 +2310,16 @@ Memory Management:
tests:
- mem_mgmt
Ezurio platforms:
Laird Connectivity platforms:
status: maintained
maintainers:
- rerickson1
collaborators:
- greg-leach
files:
- boards/ezurio/
- boards/lairdconnect/
labels:
- "platform: Ezurio"
- "platform: Laird Connectivity"
Linker Scripts:
status: maintained
@ -2497,7 +2414,6 @@ Mbed TLS:
- ceolin
collaborators:
- ithinuel
- valeriosetti
files:
- tests/crypto/mbedtls/
- tests/benchmarks/mbedtls/
@ -2545,7 +2461,7 @@ Modbus:
Modem:
status: maintained
maintainers:
- bjarki-andreasen
- bjarki-trackunit
collaborators:
- tomi-font
files:
@ -2654,11 +2570,9 @@ Networking:
files-exclude:
- doc/connectivity/networking/api/gptp.rst
- doc/connectivity/networking/api/ieee802154.rst
- doc/connectivity/networking/api/ptp.rst
- doc/connectivity/networking/api/wifi.rst
- include/zephyr/net/gptp.h
- include/zephyr/net/ieee802154*.h
- include/zephyr/net/ptp.h
- include/zephyr/net/wifi*.h
- include/zephyr/net/buf.h
- include/zephyr/net/dhcpv4*.h
@ -2674,7 +2588,6 @@ Networking:
- subsys/net/lib/coap/
- subsys/net/lib/config/ieee802154*
- subsys/net/lib/lwm2m/
- subsys/net/lib/ptp/
- subsys/net/lib/tls_credentials/
- subsys/net/lib/dhcpv4/
- tests/net/dhcpv4/
@ -2830,20 +2743,6 @@ Networking:
tests:
- net.mqtt_sn
"Networking: PTP":
status: maintained
maintainers:
- awojasinski
files:
- doc/connectivity/networking/api/ptp.rst
- include/zephyr/net/ptp.h
- subsys/net/lib/ptp/
- samples/net/ptp/
labels:
- "area: Networking"
tests:
- sample.net.ptp
"Networking: Native IEEE 802.15.4":
status: maintained
maintainers:
@ -2944,13 +2843,7 @@ Open AMP:
- carlocaione
files:
- lib/open-amp/
- samples/subsys/ipc/openamp/
- samples/subsys/ipc/openamp_rsc_table/
- samples/subsys/ipc/rpmsg_service/
labels:
- "area: Open AMP"
tests:
- sample.ipc.openamp
POSIX API layer:
status: maintained
@ -3015,8 +2908,6 @@ RISCV arch:
- ycsin
files:
- arch/riscv/
- boards/enjoydigital/litex_vexriscv/
- boards/lowrisc/opentitan_earlgrey/
- boards/qemu/riscv*/
- boards/sifive/
- boards/sparkfun/red_v_things_plus/
@ -3074,12 +2965,10 @@ Sensor Subsystem:
- doc/services/sensing/
- subsys/sensing/
- samples/subsys/sensing/
- tests/subsys/sensing/
labels:
- "area: Sensor Subsystem"
tests:
- sample.sensing
- sensing.api
Stats:
status: odd fixes
@ -3096,6 +2985,7 @@ Twister:
collaborators:
- PerMac
- hakehuang
- gopiotr
- golowanow
- gchwier
- LukaszMrugala
@ -3191,13 +3081,11 @@ State machine framework:
- sambhurst
collaborators:
- keith-zephyr
- glenn-andrews
files:
- doc/services/smf/
- include/zephyr/smf.h
- lib/smf/
- tests/lib/smf/
- samples/subsys/smf/
labels:
- "area: State Machine Framework"
tests:
@ -3212,18 +3100,16 @@ ADI Platforms:
- microbuilder
files:
- boards/adi/
- drivers/*/*max*
- drivers/*/max*
- drivers/*/*max*/
- drivers/dac/dac_ltc*
- drivers/ethernet/eth_adin*
- drivers/mdio/mdio_adin*
- drivers/regulator/regulator_adp5360*
- drivers/sensor/adi/
- dts/arm/adi/
- dts/bindings/*/adi,*
- dts/bindings/*/lltc,*
- dts/bindings/*/maxim,*
- soc/adi/
labels:
- "platform: ADI"
@ -3271,8 +3157,6 @@ Synopsys Platforms:
- scripts/west_commands/tests/test_mdb.py
- scripts/west_commands/runners/nsim.py
- cmake/emu/nsim.cmake
- drivers/serial/uart_hostlink.c
- drivers/serial/Kconfig.hostlink
labels:
- "platform: Synopsys"
@ -3334,13 +3218,8 @@ Raspberry Pi Pico Platforms:
labels:
- "platform: Raspberry Pi Pico"
Silabs Platforms:
status: maintained
maintainers:
- jhedberg
collaborators:
- jerome-pouiller
- asmellby
SiLabs Platforms:
status: odd fixes
files:
- soc/silabs/
- boards/silabs/
@ -3348,7 +3227,7 @@ Silabs Platforms:
- dts/bindings/*/silabs*
- drivers/*/*_gecko*
labels:
- "platform: Silabs"
- "platform: SiLabs"
Intel Platforms (X86):
status: maintained
@ -3443,7 +3322,6 @@ NXP Drivers:
- decsny
- manuargue
- dbaluta
- MarkWangChinese
files:
- drivers/*/*imx*
- drivers/*/*lpc*.c
@ -3575,20 +3453,6 @@ Microchip MEC Platforms:
labels:
- "platform: Microchip MEC"
Microchip RISC-V Platforms:
status: maintained
maintainers:
- fkokosinski
- kgugala
- tgorochowik
files:
- boards/microchip/m2gl025_miv/
- boards/microchip/mpfs_icicle/
- dts/riscv/microchip/
- soc/microchip/miv/
labels:
- "platform: Microchip RISC-V"
Microchip SAM Platforms:
status: maintained
maintainers:
@ -3651,8 +3515,6 @@ Renesas SmartBond Platforms:
- ioannis-karachalios
- andrzej-kaczmarek
- blauret
collaborators:
- ydamigos
files:
- boards/renesas/da14*/
- drivers/*/*smartbond*
@ -3670,10 +3532,6 @@ Renesas RA Platforms:
status: maintained
maintainers:
- soburi
- KhiemNguyenT
collaborators:
- duynguyenxa
- thaoluonguw
files:
- boards/arduino/uno_r4/
- drivers/*/*renesas_ra*
@ -3732,10 +3590,10 @@ STM32 Platforms:
maintainers:
- erwango
collaborators:
- ajarmouni-st
- FRASTM
- gautierg-st
- GeorgeCGV
- marwaiehm-st
files:
- boards/st/
- drivers/*/*stm32*.c
@ -3915,7 +3773,6 @@ RTIO:
- teburd
collaborators:
- yperess
- ubieda
files:
- samples/subsys/rtio/
- include/zephyr/rtio/
@ -4013,7 +3870,6 @@ TF-M Integration:
collaborators:
- Vge0rge
- ithinuel
- valeriosetti
files:
- samples/tfm_integration/
- modules/trusted-firmware-m/
@ -4048,7 +3904,6 @@ TF-M Integration:
files:
- cmake/*/arcmwdt/
- include/zephyr/toolchain/mwdt.h
- include/zephyr/linker/linker-tool-mwdt.h
- lib/libc/arcmwdt/*
labels:
- "area: Toolchains"
@ -4111,6 +3966,7 @@ USB:
- tests/drivers/usb/
- tests/drivers/udc/
- doc/connectivity/usb/
- scripts/generate_usb_vif/
labels:
- "area: USB"
tests:
@ -4132,7 +3988,6 @@ USB-C:
- subsys/usb/usb_c/
- doc/connectivity/usb/pd/
- doc/hardware/peripherals/usbc_vbus.rst
- scripts/generate_usb_vif/
labels:
- "area: USB-C"
tests:
@ -4184,9 +4039,10 @@ VFS:
- filesystem
West:
status: odd fixes
collaborators:
status: maintained
maintainers:
- mbolivar-ampere
collaborators:
- carlescufi
- swinslow
files:
@ -4331,7 +4187,7 @@ West:
files:
- modules/cmsis/
labels:
- "area: CMSIS-Core"
- "area: ARM"
"West project: cmsis-dsp":
status: maintained
@ -4448,7 +4304,7 @@ West:
- drivers/misc/ethos_u/
- modules/hal_ethos_u/
labels:
- "platform: ARM"
- "area: ARM"
"West project: hal_gigadevice":
status: maintained
@ -4553,10 +4409,6 @@ West:
collaborators:
- blauret
- andrzej-kaczmarek
- ydamigos
- soburi
- duynguyenxa
- thaoluonguw
files: []
labels:
- "platform: Renesas"
@ -4571,19 +4423,15 @@ West:
- "platform: Raspberry Pi Pico"
"West project: hal_silabs":
status: maintained
maintainers:
- jhedberg
status: odd fixes
collaborators:
- jerome-pouiller
- asmellby
- sateeshkotapati
- yonsch
- mnkp
files:
- modules/Kconfig.silabs
labels:
- "platform: Silabs"
- "platform: SiLabs"
"West project: hal_st":
status: maintained
@ -4600,8 +4448,9 @@ West:
- erwango
collaborators:
- FRASTM
- ABOSTM
- gautierg-st
- marwaiehm-st
- Desvauxm-st
files:
- modules/Kconfig.stm32
labels:
@ -4723,7 +4572,6 @@ West:
- ceolin
collaborators:
- ithinuel
- valeriosetti
files:
- modules/mbedtls/
labels:
@ -4738,7 +4586,7 @@ West:
- nordicjm
files:
- modules/Kconfig.mcuboot
- tests/boot/
- tests/boot/test_mcuboot/
labels:
- "area: MCUBoot"
@ -5022,7 +4870,6 @@ Continuous Integration:
files:
- .github/
- scripts/ci/
- scripts/make_bugs_pickle.py
- .checkpatch.conf
- scripts/gitlint/
- scripts/set_assignees.py
@ -5044,7 +4891,7 @@ Test Framework (Ztest):
- tests/unit/util/
- tests/subsys/testsuite/
- samples/subsys/testsuite/
- doc/develop/test/
- doc/develop/test/ztest.rst
labels:
- "area: Testsuite"
tests:
@ -5086,7 +4933,7 @@ Random:
# This area is to be converted to a subarea
Testing with Renode:
status: odd fixes
status: maintained
collaborators:
- mateusz-holenko
- fkokosinski
@ -5126,6 +4973,6 @@ zbus:
- subsys/llext/
- doc/services/llext/
labels:
- "area: llext"
- "area: Linkable Loadable Extensions"
tests:
- llext

View file

@ -1 +1 @@
0.16.8
0.16.5-1

View file

@ -1,5 +1,5 @@
VERSION_MAJOR = 3
VERSION_MINOR = 7
PATCHLEVEL = 0
VERSION_MINOR = 6
PATCHLEVEL = 99
VERSION_TWEAK = 0
EXTRAVERSION = rc3
EXTRAVERSION =

View file

@ -24,7 +24,6 @@ config ARC
imply XIP
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_SUPPORTS_ROM_START
select ARCH_HAS_DIRECTED_IPIS
help
ARC architecture
@ -51,7 +50,6 @@ config ARM64
select USE_SWITCH_SUPPORTED
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select BARRIER_OPERATIONS_ARCH
select ARCH_HAS_DIRECTED_IPIS
help
ARM64 (AArch64) architecture
@ -110,15 +108,13 @@ config RISCV
bool
select ARCH_IS_SET
select ARCH_SUPPORTS_COREDUMP
select ARCH_SUPPORTS_ROM_START if !SOC_FAMILY_ESPRESSIF_ESP32
select ARCH_SUPPORTS_ROM_START if !SOC_SERIES_ESP32C3
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_THREAD_LOCAL_STORAGE
select ARCH_HAS_STACKWALK
select IRQ_OFFLOAD_NESTED if IRQ_OFFLOAD
select USE_SWITCH_SUPPORTED
select USE_SWITCH
select SCHED_IPI_SUPPORTED if SMP
select ARCH_HAS_DIRECTED_IPIS
select BARRIER_OPERATIONS_BUILTIN
imply XIP
help
@ -133,8 +129,6 @@ config XTENSA
select ARCH_HAS_CODE_DATA_RELOCATION
select ARCH_HAS_TIMING_FUNCTIONS
select ARCH_MEM_DOMAIN_DATA if USERSPACE
select ARCH_HAS_DIRECTED_IPIS
select THREAD_STACK_INFO
help
Xtensa architecture
@ -215,7 +209,7 @@ config SRAM_BASE_ADDRESS
hex "SRAM Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_SRAM))
help
The SRAM base address. The default value comes from
The SRAM base address. The default value comes from from
/chosen/zephyr,sram in devicetree. The user should generally avoid
changing it via menuconfig or in configuration files.
@ -227,7 +221,6 @@ DT_CHOSEN_Z_FLASH := zephyr,flash
config FLASH_SIZE
int "Flash Size in kB"
default $(dt_chosen_reg_size_int,$(DT_CHOSEN_Z_FLASH),0,K) if (XIP && (ARM ||ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the size of the flash in kB. It is normally set by
the board's defconfig file and the user should generally avoid modifying
@ -236,7 +229,6 @@ config FLASH_SIZE
config FLASH_BASE_ADDRESS
hex "Flash Base Address"
default $(dt_chosen_reg_addr_hex,$(DT_CHOSEN_Z_FLASH)) if (XIP && (ARM || ARM64)) || !ARM
default 0 if !XIP
help
This option specifies the base address of the flash on the board. It is
normally set by the board's defconfig file and the user should generally
@ -406,21 +398,6 @@ config NOCACHE_MEMORY
transfers when cache coherence issues are not optimal or can not
be solved using cache maintenance operations.
config FRAME_POINTER
bool "Compile the kernel with frame pointers"
select OVERRIDE_FRAME_POINTER_DEFAULT
help
Select Y here to gain precise stack traces at the expense of slightly
increased size and decreased speed.
config ARCH_STACKWALK_MAX_FRAMES
int "Max depth for stack walk function"
default 8
depends on ARCH_HAS_STACKWALK
help
Depending on implementation, this can place a hard limit on the depths of the stack
for the stack walk function to examine.
menu "Interrupt Configuration"
config ISR_TABLES_LOCAL_DECLARATION_SUPPORTED
@ -666,11 +643,6 @@ config ARCH_HAS_EXTRA_EXCEPTION_INFO
config ARCH_HAS_GDBSTUB
bool
config ARCH_HAS_STACKWALK
bool
help
This is selected when the architecture implemented the arch_stack_walk() API.
config ARCH_HAS_COHERENCE
bool
help
@ -767,13 +739,6 @@ config ARCH_HAS_RESERVED_PAGE_FRAMES
memory mappings. The architecture will need to implement
arch_reserved_pages_update().
config ARCH_HAS_DIRECTED_IPIS
bool
help
This hidden configuration should be selected by the architecture if
it has an implementation for arch_sched_directed_ipi() which allows
for IPIs to be directed to specific CPUs.
config CPU_HAS_DCACHE
bool
help
@ -809,7 +774,7 @@ config ARCH_MAPS_ALL_RAM
virtual addresses elsewhere, this is limited to only management of the
virtual address space. The kernel's page frame ontology will not consider
this mapping at all; non-kernel pages will be considered free (unless marked
as reserved) and K_MEM_PAGE_FRAME_MAPPED will not be set.
as reserved) and Z_PAGE_FRAME_MAPPED will not be set.
config DCLS
bool "Processor is configured in DCLS mode"

View file

@ -18,7 +18,6 @@ config CPU_ARCEM
config CPU_ARCHS
bool
select ATOMIC_OPERATIONS_BUILTIN
select BARRIER_OPERATIONS_BUILTIN
help
This option signifies the use of an ARC HS CPU

View file

@ -23,7 +23,7 @@
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_EXCEPTION_DEBUG
static void dump_arc_esf(const struct arch_esf *esf)
static void dump_arc_esf(const z_arch_esf_t *esf)
{
ARC_EXCEPTION_DUMP(" r0: 0x%" PRIxPTR " r1: 0x%" PRIxPTR " r2: 0x%" PRIxPTR
" r3: 0x%" PRIxPTR "", esf->r0, esf->r1, esf->r2, esf->r3);
@ -42,7 +42,7 @@ static void dump_arc_esf(const struct arch_esf *esf)
}
#endif
void z_arc_fatal_error(unsigned int reason, const struct arch_esf *esf)
void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
#ifdef CONFIG_EXCEPTION_DEBUG
if (esf != NULL) {

View file

@ -346,7 +346,7 @@ static void dump_exception_info(uint32_t vector, uint32_t cause, uint32_t parame
* invokes the user provided routine k_sys_fatal_error_handler() which is
* responsible for implementing the error handling policy.
*/
void _Fault(struct arch_esf *esf, uint32_t old_sp)
void _Fault(z_arch_esf_t *esf, uint32_t old_sp)
{
uint32_t vector, cause, parameter;
uint32_t exc_addr = z_arc_v2_aux_reg_read(_ARC_V2_EFA);

View file

@ -26,7 +26,7 @@ GTEXT(_isr_wrapper)
GTEXT(_isr_demux)
#if defined(CONFIG_PM)
GTEXT(pm_system_resume)
GTEXT(z_pm_save_idle_exit)
#endif
/*
@ -253,7 +253,7 @@ rirq_path:
st 0, [r1, _kernel_offset_to_idle] /* zero idle duration */
PUSHR blink
jl pm_system_resume
jl z_pm_save_idle_exit
POPR blink
_skip_pm_save_idle_exit:

View file

@ -118,7 +118,7 @@ static inline bool _is_enabled_region(uint32_t r_index)
}
/**
* This internal function check if the given buffer is in the region
* This internal function check if the given buffer in in the region
*/
static inline bool _is_in_region(uint32_t r_index, uint32_t start, uint32_t size)
{

View file

@ -156,7 +156,7 @@ static inline bool _is_enabled_region(uint32_t r_index)
}
/**
* This internal function check if the given buffer is in the region
* This internal function check if the given buffer in in the region
*/
static inline bool _is_in_region(uint32_t r_index, uint32_t start, uint32_t size)
{

View file

@ -13,7 +13,6 @@
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <ksched.h>
#include <ipi.h>
#include <zephyr/init.h>
#include <zephyr/irq.h>
#include <arc_irq_offload.h>
@ -131,27 +130,21 @@ static void sched_ipi_handler(const void *unused)
z_sched_ipi();
}
void arch_sched_directed_ipi(uint32_t cpu_bitmap)
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
unsigned int i;
unsigned int num_cpus = arch_num_cpus();
uint32_t i;
/* Send sched_ipi request to other cores
/* broadcast sched_ipi request to other cores
* if the target is current core, hardware will ignore it
*/
unsigned int num_cpus = arch_num_cpus();
for (i = 0U; i < num_cpus; i++) {
if ((cpu_bitmap & BIT(i)) != 0) {
z_arc_connect_ici_generate(i);
}
z_arc_connect_ici_generate(i);
}
}
void arch_sched_broadcast_ipi(void)
{
arch_sched_directed_ipi(IPI_ALL_CPUS_MASK);
}
int arch_smp_init(void)
{
struct arc_connect_bcr bcr;
@ -195,4 +188,5 @@ int arch_smp_init(void)
return 0;
}
SYS_INIT(arch_smp_init, PRE_KERNEL_1, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif

View file

@ -36,7 +36,7 @@ extern "C" {
#endif
#ifdef CONFIG_ARC_HAS_SECURE
struct arch_esf {
struct _irq_stack_frame {
#ifdef CONFIG_ARC_HAS_ZOL
uintptr_t lp_end;
uintptr_t lp_start;
@ -72,7 +72,7 @@ struct arch_esf {
uintptr_t status32;
};
#else
struct arch_esf {
struct _irq_stack_frame {
uintptr_t r0;
uintptr_t r1;
uintptr_t r2;
@ -108,7 +108,7 @@ struct arch_esf {
};
#endif
typedef struct arch_esf _isf_t;
typedef struct _irq_stack_frame _isf_t;

View file

@ -62,7 +62,9 @@ extern void z_arc_userspace_enter(k_thread_entry_t user_entry, void *p1,
void *p2, void *p3, uint32_t stack, uint32_t size,
struct k_thread *thread);
extern void z_arc_fatal_error(unsigned int reason, const struct arch_esf *esf);
extern void z_arc_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
extern void arch_sched_ipi(void);
extern void z_arc_switch(void *switch_to, void **switched_from);

View file

@ -1,9 +1,5 @@
# SPDX-License-Identifier: Apache-2.0
if(CONFIG_BIG_ENDIAN)
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-bigarm)
else()
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearm)
endif()
set_property(GLOBAL PROPERTY PROPERTY_OUTPUT_FORMAT elf32-littlearm)
add_subdirectory(core)

View file

@ -35,7 +35,7 @@ config ARM_CUSTOM_INTERRUPT_CONTROLLER
assumes responsibility for handling the NVIC.
config ROMSTART_RELOCATION_ROM
bool "Relocate rom_start region"
bool
default n
help
Relocates the rom_start region containing the boot-vector data and
@ -66,7 +66,7 @@ config ROMSTART_RELOCATION_ROM
if ROMSTART_RELOCATION_ROM
config ROMSTART_REGION_ADDRESS
hex "Base address of the rom_start region"
hex
default 0x00000000
help
Start address of the rom_start region.
@ -85,7 +85,7 @@ if ROMSTART_RELOCATION_ROM
$(dt_nodelabel_reg_addr_hex,ocram_s_sys)
config ROMSTART_REGION_SIZE
hex "Size of the rom_start region"
hex
default 1
help
Size of the rom_start region in KB.

View file

@ -60,7 +60,7 @@ config CPU_AARCH32_CORTEX_A
select USE_SWITCH_SUPPORTED
# GDBSTUB has not yet been tested on Cortex M or R SoCs
select ARCH_HAS_GDBSTUB
# GDB on ARM needs the extra registers
# GDB on ARM needs the etxra registers
select EXTRA_EXCEPTION_INFO if GDBSTUB
help
This option signifies the use of a CPU of the Cortex-A family.

View file

@ -131,7 +131,6 @@ config AARCH32_ARMV8_R
bool
select ATOMIC_OPERATIONS_BUILTIN
select SCHED_IPI_SUPPORTED if SMP
select ARCH_HAS_DIRECTED_IPIS
help
This option signifies the use of an ARMv8-R AArch32 processor
implementation.

View file

@ -206,7 +206,7 @@ bool z_arm_fault_undef_instruction_fp(void)
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_undef_instruction(struct arch_esf *esf)
bool z_arm_fault_undef_instruction(z_arch_esf_t *esf)
{
#if defined(CONFIG_FPU_SHARING)
/*
@ -243,7 +243,7 @@ bool z_arm_fault_undef_instruction(struct arch_esf *esf)
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_prefetch(struct arch_esf *esf)
bool z_arm_fault_prefetch(z_arch_esf_t *esf)
{
uint32_t reason = K_ERR_CPU_EXCEPTION;
@ -299,7 +299,7 @@ static const struct z_exc_handle exceptions[] = {
*
* @return true if error is recoverable, otherwise return false.
*/
static bool memory_fault_recoverable(struct arch_esf *esf)
static bool memory_fault_recoverable(z_arch_esf_t *esf)
{
for (int i = 0; i < ARRAY_SIZE(exceptions); i++) {
/* Mask out instruction mode */
@ -321,7 +321,7 @@ static bool memory_fault_recoverable(struct arch_esf *esf)
*
* @return Returns true if the fault is fatal
*/
bool z_arm_fault_data(struct arch_esf *esf)
bool z_arm_fault_data(z_arch_esf_t *esf)
{
uint32_t reason = K_ERR_CPU_EXCEPTION;

View file

@ -71,7 +71,7 @@ void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
}
#endif /* !CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER */
void z_arm_fatal_error(unsigned int reason, const struct arch_esf *esf);
void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
/**
*

View file

@ -156,7 +156,7 @@ _vfp_not_enabled:
* idle, this ensures that the calculation and programming of the
* device for the next timer deadline is not interrupted. For
* non-tickless idle, this ensures that the clearing of the kernel idle
* state is not interrupted. In each case, pm_system_resume
* state is not interrupted. In each case, z_pm_save_idle_exit
* is called with interrupts disabled.
*/
@ -170,7 +170,7 @@ _vfp_not_enabled:
movs r1, #0
/* clear kernel idle state */
str r1, [r2, #_kernel_offset_to_idle]
bl pm_system_resume
bl z_pm_save_idle_exit
_idle_state_cleared:
#endif /* CONFIG_PM */
@ -189,7 +189,7 @@ _idle_state_cleared:
*
* Note that interrupts are disabled up to this point on the ARM
* architecture variants other than the Cortex-M. It is also important
* to note that most interrupt controllers require that the nested
* to note that that most interrupt controllers require that the nested
* interrupts are handled after the active interrupt is acknowledged;
* this is be done through the `get_active` interrupt controller
* interface function.
@ -269,7 +269,7 @@ SECTION_FUNC(TEXT, _isr_wrapper)
* idle, this ensures that the calculation and programming of the
* device for the next timer deadline is not interrupted. For
* non-tickless idle, this ensures that the clearing of the kernel idle
* state is not interrupted. In each case, pm_system_resume
* state is not interrupted. In each case, z_pm_save_idle_exit
* is called with interrupts disabled.
*/
@ -283,7 +283,7 @@ SECTION_FUNC(TEXT, _isr_wrapper)
movs r1, #0
/* clear kernel idle state */
str r1, [r2, #_kernel_offset_to_idle]
bl pm_system_resume
bl z_pm_save_idle_exit
_idle_state_cleared:
#endif /* CONFIG_PM */

View file

@ -7,7 +7,6 @@
#include <zephyr/kernel.h>
#include <zephyr/arch/arm/cortex_a_r/lib_helpers.h>
#include <zephyr/drivers/interrupt_controller/gic.h>
#include <ipi.h>
#include "boot.h"
#include "zephyr/cache.h"
#include "zephyr/kernel/thread_stack.h"
@ -211,7 +210,7 @@ void arch_secondary_cpu_init(void)
#ifdef CONFIG_SMP
static void send_ipi(unsigned int ipi, uint32_t cpu_bitmap)
static void broadcast_ipi(unsigned int ipi)
{
uint32_t mpidr = MPIDR_TO_CORE(GET_MPIDR());
@ -221,10 +220,6 @@ static void send_ipi(unsigned int ipi, uint32_t cpu_bitmap)
unsigned int num_cpus = arch_num_cpus();
for (int i = 0; i < num_cpus; i++) {
if ((cpu_bitmap & BIT(i)) == 0) {
continue;
}
uint32_t target_mpidr = cpu_map[i];
uint8_t aff0;
@ -244,14 +239,10 @@ void sched_ipi_handler(const void *unused)
z_sched_ipi();
}
void arch_sched_broadcast_ipi(void)
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
send_ipi(SGI_SCHED_IPI, IPI_ALL_CPUS_MASK);
}
void arch_sched_directed_ipi(uint32_t cpu_bitmap)
{
send_ipi(SGI_SCHED_IPI, cpu_bitmap);
broadcast_ipi(SGI_SCHED_IPI);
}
int arch_smp_init(void)
@ -268,4 +259,6 @@ int arch_smp_init(void)
return 0;
}
SYS_INIT(arch_smp_init, PRE_KERNEL_2, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif

View file

@ -95,10 +95,6 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
iframe->a4 = (uint32_t)p3;
iframe->xpsr = A_BIT | MODE_SYS;
#if defined(CONFIG_BIG_ENDIAN)
iframe->xpsr |= E_BIT;
#endif /* CONFIG_BIG_ENDIAN */
#if defined(CONFIG_COMPILER_ISA_THUMB2)
iframe->xpsr |= T_BIT;
#endif /* CONFIG_COMPILER_ISA_THUMB2 */

View file

@ -73,17 +73,6 @@ config CPU_CORTEX_M55
help
This option signifies the use of a Cortex-M55 CPU
config CPU_CORTEX_M85
bool
select CPU_CORTEX_M
select ARMV8_1_M_MAINLINE
select ARMV8_M_SE if CPU_HAS_TEE
select ARMV7_M_ARMV8_M_FP if CPU_HAS_FPU
select CPU_HAS_DCACHE
select CPU_HAS_ICACHE
help
This option signifies the use of a Cortex-M85 CPU
config CPU_CORTEX_M7
bool
select CPU_CORTEX_M

View file

@ -41,7 +41,7 @@ struct arm_arch_block {
*/
static struct arm_arch_block arch_blk;
void arch_coredump_info_dump(const struct arch_esf *esf)
void arch_coredump_info_dump(const z_arch_esf_t *esf)
{
struct coredump_arch_hdr_t hdr = {
.id = COREDUMP_ARCH_HDR_ID,

View file

@ -146,7 +146,7 @@ LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
*/
#if (CONFIG_FAULT_DUMP == 1)
static void fault_show(const struct arch_esf *esf, int fault)
static void fault_show(const z_arch_esf_t *esf, int fault)
{
PR_EXC("Fault! EXC #%d", fault);
@ -165,7 +165,7 @@ static void fault_show(const struct arch_esf *esf, int fault)
*
* For Dump level 0, no information needs to be generated.
*/
static void fault_show(const struct arch_esf *esf, int fault)
static void fault_show(const z_arch_esf_t *esf, int fault)
{
(void)esf;
(void)fault;
@ -185,7 +185,7 @@ static const struct z_exc_handle exceptions[] = {
*
* @return true if error is recoverable, otherwise return false.
*/
static bool memory_fault_recoverable(struct arch_esf *esf, bool synchronous)
static bool memory_fault_recoverable(z_arch_esf_t *esf, bool synchronous)
{
#ifdef CONFIG_USERSPACE
for (int i = 0; i < ARRAY_SIZE(exceptions); i++) {
@ -228,7 +228,7 @@ uint32_t z_check_thread_stack_fail(const uint32_t fault_addr,
*
* @return error code to identify the fatal error reason
*/
static uint32_t mem_manage_fault(struct arch_esf *esf, int from_hard_fault,
static uint32_t mem_manage_fault(z_arch_esf_t *esf, int from_hard_fault,
bool *recoverable)
{
uint32_t reason = K_ERR_ARM_MEM_GENERIC;
@ -387,7 +387,7 @@ static uint32_t mem_manage_fault(struct arch_esf *esf, int from_hard_fault,
* @return error code to identify the fatal error reason.
*
*/
static int bus_fault(struct arch_esf *esf, int from_hard_fault, bool *recoverable)
static int bus_fault(z_arch_esf_t *esf, int from_hard_fault, bool *recoverable)
{
uint32_t reason = K_ERR_ARM_BUS_GENERIC;
@ -549,7 +549,7 @@ static int bus_fault(struct arch_esf *esf, int from_hard_fault, bool *recoverabl
*
* @return error code to identify the fatal error reason
*/
static uint32_t usage_fault(const struct arch_esf *esf)
static uint32_t usage_fault(const z_arch_esf_t *esf)
{
uint32_t reason = K_ERR_ARM_USAGE_GENERIC;
@ -612,7 +612,7 @@ static uint32_t usage_fault(const struct arch_esf *esf)
*
* @return error code to identify the fatal error reason
*/
static uint32_t secure_fault(const struct arch_esf *esf)
static uint32_t secure_fault(const z_arch_esf_t *esf)
{
uint32_t reason = K_ERR_ARM_SECURE_GENERIC;
@ -661,7 +661,7 @@ static uint32_t secure_fault(const struct arch_esf *esf)
* See z_arm_fault_dump() for example.
*
*/
static void debug_monitor(struct arch_esf *esf, bool *recoverable)
static void debug_monitor(z_arch_esf_t *esf, bool *recoverable)
{
*recoverable = false;
@ -687,7 +687,7 @@ static void debug_monitor(struct arch_esf *esf, bool *recoverable)
#error Unknown ARM architecture
#endif /* CONFIG_ARMV6_M_ARMV8_M_BASELINE */
static inline bool z_arm_is_synchronous_svc(struct arch_esf *esf)
static inline bool z_arm_is_synchronous_svc(z_arch_esf_t *esf)
{
uint16_t *ret_addr = (uint16_t *)esf->basic.pc;
/* SVC is a 16-bit instruction. On a synchronous SVC
@ -762,7 +762,7 @@ static inline bool z_arm_is_pc_valid(uintptr_t pc)
*
* @return error code to identify the fatal error reason
*/
static uint32_t hard_fault(struct arch_esf *esf, bool *recoverable)
static uint32_t hard_fault(z_arch_esf_t *esf, bool *recoverable)
{
uint32_t reason = K_ERR_CPU_EXCEPTION;
@ -829,7 +829,7 @@ static uint32_t hard_fault(struct arch_esf *esf, bool *recoverable)
* See z_arm_fault_dump() for example.
*
*/
static void reserved_exception(const struct arch_esf *esf, int fault)
static void reserved_exception(const z_arch_esf_t *esf, int fault)
{
ARG_UNUSED(esf);
@ -839,7 +839,7 @@ static void reserved_exception(const struct arch_esf *esf, int fault)
}
/* Handler function for ARM fault conditions. */
static uint32_t fault_handle(struct arch_esf *esf, int fault, bool *recoverable)
static uint32_t fault_handle(z_arch_esf_t *esf, int fault, bool *recoverable)
{
uint32_t reason = K_ERR_CPU_EXCEPTION;
@ -893,7 +893,7 @@ static uint32_t fault_handle(struct arch_esf *esf, int fault, bool *recoverable)
*
* @param secure_esf Pointer to the secure stack frame.
*/
static void secure_stack_dump(const struct arch_esf *secure_esf)
static void secure_stack_dump(const z_arch_esf_t *secure_esf)
{
/*
* In case a Non-Secure exception interrupted the Secure
@ -918,7 +918,7 @@ static void secure_stack_dump(const struct arch_esf *secure_esf)
* Non-Secure exception entry.
*/
top_of_sec_stack += ADDITIONAL_STATE_CONTEXT_WORDS;
secure_esf = (const struct arch_esf *)top_of_sec_stack;
secure_esf = (const z_arch_esf_t *)top_of_sec_stack;
sec_ret_addr = secure_esf->basic.pc;
} else {
/* Exception during Non-Secure function call.
@ -947,11 +947,11 @@ static void secure_stack_dump(const struct arch_esf *secure_esf)
*
* @return ESF pointer on success, otherwise return NULL
*/
static inline struct arch_esf *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_return,
static inline z_arch_esf_t *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_return,
bool *nested_exc)
{
bool alternative_state_exc = false;
struct arch_esf *ptr_esf = NULL;
z_arch_esf_t *ptr_esf = NULL;
*nested_exc = false;
@ -979,14 +979,14 @@ static inline struct arch_esf *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_
alternative_state_exc = true;
/* Dump the Secure stack before handling the actual fault. */
struct arch_esf *secure_esf;
z_arch_esf_t *secure_esf;
if (exc_return & EXC_RETURN_SPSEL_PROCESS) {
/* Secure stack pointed by PSP */
secure_esf = (struct arch_esf *)psp;
secure_esf = (z_arch_esf_t *)psp;
} else {
/* Secure stack pointed by MSP */
secure_esf = (struct arch_esf *)msp;
secure_esf = (z_arch_esf_t *)msp;
*nested_exc = true;
}
@ -997,9 +997,9 @@ static inline struct arch_esf *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_
* and supply it to the fault handing function.
*/
if (exc_return & EXC_RETURN_MODE_THREAD) {
ptr_esf = (struct arch_esf *)__TZ_get_PSP_NS();
ptr_esf = (z_arch_esf_t *)__TZ_get_PSP_NS();
} else {
ptr_esf = (struct arch_esf *)__TZ_get_MSP_NS();
ptr_esf = (z_arch_esf_t *)__TZ_get_MSP_NS();
}
}
#elif defined(CONFIG_ARM_NONSECURE_FIRMWARE)
@ -1024,10 +1024,10 @@ static inline struct arch_esf *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_
if (exc_return & EXC_RETURN_SPSEL_PROCESS) {
/* Non-Secure stack frame on PSP */
ptr_esf = (struct arch_esf *)psp;
ptr_esf = (z_arch_esf_t *)psp;
} else {
/* Non-Secure stack frame on MSP */
ptr_esf = (struct arch_esf *)msp;
ptr_esf = (z_arch_esf_t *)msp;
}
} else {
/* Exception entry occurred in Non-Secure stack. */
@ -1046,11 +1046,11 @@ static inline struct arch_esf *get_esf(uint32_t msp, uint32_t psp, uint32_t exc_
if (!alternative_state_exc) {
if (exc_return & EXC_RETURN_MODE_THREAD) {
/* Returning to thread mode */
ptr_esf = (struct arch_esf *)psp;
ptr_esf = (z_arch_esf_t *)psp;
} else {
/* Returning to handler mode */
ptr_esf = (struct arch_esf *)msp;
ptr_esf = (z_arch_esf_t *)msp;
*nested_exc = true;
}
}
@ -1095,12 +1095,12 @@ void z_arm_fault(uint32_t msp, uint32_t psp, uint32_t exc_return,
uint32_t reason = K_ERR_CPU_EXCEPTION;
int fault = SCB->ICSR & SCB_ICSR_VECTACTIVE_Msk;
bool recoverable, nested_exc;
struct arch_esf *esf;
z_arch_esf_t *esf;
/* Create a stack-ed copy of the ESF to be used during
* the fault handling process.
*/
struct arch_esf esf_copy;
z_arch_esf_t esf_copy;
/* Force unlock interrupts */
arch_irq_unlock(0);
@ -1123,13 +1123,13 @@ void z_arm_fault(uint32_t msp, uint32_t psp, uint32_t exc_return,
/* Copy ESF */
#if !defined(CONFIG_EXTRA_EXCEPTION_INFO)
memcpy(&esf_copy, esf, sizeof(struct arch_esf));
memcpy(&esf_copy, esf, sizeof(z_arch_esf_t));
ARG_UNUSED(callee_regs);
#else
/* the extra exception info is not present in the original esf
* so we only copy the fields before those.
*/
memcpy(&esf_copy, esf, offsetof(struct arch_esf, extra_info));
memcpy(&esf_copy, esf, offsetof(z_arch_esf_t, extra_info));
esf_copy.extra_info = (struct __extra_esf_info) {
.callee = callee_regs,
.exc_return = exc_return,

View file

@ -94,7 +94,7 @@ void z_arm_irq_priority_set(unsigned int irq, unsigned int prio, uint32_t flags)
#endif /* !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER) */
void z_arm_fatal_error(unsigned int reason, const struct arch_esf *esf);
void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
/**
*
@ -122,7 +122,7 @@ void _arch_isr_direct_pm(void)
#elif defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* Lock all interrupts. irq_lock() will on this CPU only disable those
* lower than BASEPRI, which is not what we want. See comments in
* arch/arm/core/cortex_m/isr_wrapper.c
* arch/arm/core/isr_wrapper.S
*/
__asm__ volatile("cpsid i" : : : "memory");
#else


@ -96,15 +96,11 @@ uintptr_t z_arm_pendsv_c(uintptr_t exc_ret)
/* restore mode */
IF_ENABLED(CONFIG_USERSPACE, ({
CONTROL_Type ctrl = {.w = __get_CONTROL()};
/* exit privileged state when returning to thread mode. */
ctrl.b.nPRIV = 0;
/* __set_CONTROL inserts an ISB, which may not be necessary here
 * (the stack pointer may not be touched), but it is recommended to
 * avoid executing pre-fetched instructions with the previous
 * privilege.
 */
__set_CONTROL(ctrl.w | current->arch.mode);
}));
CONTROL_Type ctrl = {.w = __get_CONTROL()};
/* exit privileged state when returning to thread mode. */
ctrl.b.nPRIV = 0;
__set_CONTROL(ctrl.w | current->arch.mode);
}));
return exc_ret;
}


@ -588,7 +588,7 @@ void arch_switch_to_main_thread(struct k_thread *main_thread, char *stack_ptr,
"bx r4\n" /* We don't intend to return, so there is no need to link. */
: "+r" (_main)
: "r" (stack_ptr)
: "r0", "r1", "r2", "r3", "r4", "ip", "lr");
: "r0", "r1", "r2", "r3", "r4");
CODE_UNREACHABLE;
}
@ -659,7 +659,7 @@ FUNC_NORETURN void z_arm_switch_to_main_no_multithreading(
#ifdef CONFIG_BUILTIN_STACK_GUARD
, [_psplim]"r" (psplim)
#endif
: "r0", "r1", "r2", "ip", "lr"
: "r0", "r1", "r2", "r3"
);
CODE_UNREACHABLE; /* LCOV_EXCL_LINE */


@ -18,7 +18,7 @@
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
#ifdef CONFIG_EXCEPTION_DEBUG
static void esf_dump(const struct arch_esf *esf)
static void esf_dump(const z_arch_esf_t *esf)
{
LOG_ERR("r0/a1: 0x%08x r1/a2: 0x%08x r2/a3: 0x%08x",
esf->basic.a1, esf->basic.a2, esf->basic.a3);
@ -66,7 +66,7 @@ static void esf_dump(const struct arch_esf *esf)
}
#endif /* CONFIG_EXCEPTION_DEBUG */
void z_arm_fatal_error(unsigned int reason, const struct arch_esf *esf)
void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf)
{
#ifdef CONFIG_EXCEPTION_DEBUG
if (esf != NULL) {
@ -102,7 +102,7 @@ void z_arm_fatal_error(unsigned int reason, const struct arch_esf *esf)
* @param esf exception frame
* @param callee_regs Callee-saved registers (R4-R11)
*/
void z_do_kernel_oops(const struct arch_esf *esf, _callee_saved_t *callee_regs)
void z_do_kernel_oops(const z_arch_esf_t *esf, _callee_saved_t *callee_regs)
{
#if !(defined(CONFIG_EXTRA_EXCEPTION_INFO) && defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE))
ARG_UNUSED(callee_regs);
@ -130,9 +130,9 @@ void z_do_kernel_oops(const struct arch_esf *esf, _callee_saved_t *callee_regs)
#if !defined(CONFIG_EXTRA_EXCEPTION_INFO)
z_arm_fatal_error(reason, esf);
#else
struct arch_esf esf_copy;
z_arch_esf_t esf_copy;
memcpy(&esf_copy, esf, offsetof(struct arch_esf, extra_info));
memcpy(&esf_copy, esf, offsetof(z_arch_esf_t, extra_info));
#if defined(CONFIG_ARMV7_M_ARMV8_M_MAINLINE)
/* extra exception info is collected in callee_reg param
* on CONFIG_ARMV7_M_ARMV8_M_MAINLINE
@ -156,7 +156,7 @@ void z_do_kernel_oops(const struct arch_esf *esf, _callee_saved_t *callee_regs)
FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
{
uint32_t *ssf_contents = ssf_ptr;
struct arch_esf oops_esf = { 0 };
z_arch_esf_t oops_esf = { 0 };
/* TODO: Copy the rest of the register set out of ssf_ptr */
oops_esf.basic.pc = ssf_contents[3];


@ -42,7 +42,7 @@ static int is_bkpt(unsigned int exc_cause)
}
/* Wrapper function to save and restore execution context */
void z_gdb_entry(struct arch_esf *esf, unsigned int exc_cause)
void z_gdb_entry(z_arch_esf_t *esf, unsigned int exc_cause)
{
/* Disable the hardware breakpoint in case it was set */
__asm__ volatile("mcr p14, 0, %0, c0, c0, 5" ::"r"(0x0) :);


@ -54,7 +54,6 @@ static uint8_t static_regions_num;
#elif defined(CONFIG_CPU_CORTEX_M23) || \
defined(CONFIG_CPU_CORTEX_M33) || \
defined(CONFIG_CPU_CORTEX_M55) || \
defined(CONFIG_CPU_CORTEX_M85) || \
defined(CONFIG_AARCH32_ARMV8_R)
#include "arm_mpu_v8_internal.h"
#else
@ -131,10 +130,12 @@ static int mpu_configure_regions_from_dt(uint8_t *reg_index)
break;
#endif
default:
/* An attribute other than an ARM-specific one is set.
 * This region should not be configured in the MPU.
/* Either the specified `ATTR_MPU_*` attribute does not
 * exist or the `REGION_*_ATTR` macro is not defined
 * for that attribute.
*/
continue;
LOG_ERR("Invalid attribute for the region\n");
return -EINVAL;
}
#if defined(CONFIG_ARMV7_R)
region_conf.size = size_to_mpu_rasr_size(region[idx].dt_size);


@ -31,7 +31,7 @@ struct dynamic_region_info {
*/
static struct dynamic_region_info dyn_reg_info[MPU_DYNAMIC_REGION_AREAS_NUM];
#if defined(CONFIG_CPU_CORTEX_M23) || defined(CONFIG_CPU_CORTEX_M33) || \
defined(CONFIG_CPU_CORTEX_M55) || defined(CONFIG_CPU_CORTEX_M85)
defined(CONFIG_CPU_CORTEX_M55)
static inline void mpu_set_mair0(uint32_t mair0)
{
MPU->MAIR0 = mair0;


@ -38,7 +38,7 @@ static ALWAYS_INLINE bool arch_is_in_isr(void)
return (arch_curr_cpu()->nested != 0U);
}
static ALWAYS_INLINE bool arch_is_in_nested_exception(const struct arch_esf *esf)
static ALWAYS_INLINE bool arch_is_in_nested_exception(const z_arch_esf_t *esf)
{
return (arch_curr_cpu()->arch.exc_depth > 1U) ? (true) : (false);
}
@ -48,7 +48,7 @@ static ALWAYS_INLINE bool arch_is_in_nested_exception(const struct arch_esf *esf
* This function is used by privileged code to determine if the thread
* associated with the stack frame is in user mode.
*/
static ALWAYS_INLINE bool z_arm_preempted_thread_in_user_mode(const struct arch_esf *esf)
static ALWAYS_INLINE bool z_arm_preempted_thread_in_user_mode(const z_arch_esf_t *esf)
{
return ((esf->basic.xpsr & CPSR_M_Msk) == CPSR_M_USR);
}


@ -59,7 +59,7 @@ extern FUNC_NORETURN void z_arm_userspace_enter(k_thread_entry_t user_entry,
uint32_t stack_end,
uint32_t stack_start);
extern void z_arm_fatal_error(unsigned int reason, const struct arch_esf *esf);
extern void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
#endif /* _ASMLANGUAGE */


@ -68,7 +68,7 @@ static ALWAYS_INLINE bool arch_is_in_isr(void)
* @return true if execution state was in handler mode, before
* the current exception occurred, otherwise false.
*/
static ALWAYS_INLINE bool arch_is_in_nested_exception(const struct arch_esf *esf)
static ALWAYS_INLINE bool arch_is_in_nested_exception(const z_arch_esf_t *esf)
{
return (esf->basic.xpsr & IPSR_ISR_Msk) ? (true) : (false);
}
@ -80,7 +80,7 @@ static ALWAYS_INLINE bool arch_is_in_nested_exception(const struct arch_esf *esf
* @param esf the exception stack frame (unused)
* @return true if the current thread was in unprivileged mode
*/
static ALWAYS_INLINE bool z_arm_preempted_thread_in_user_mode(const struct arch_esf *esf)
static ALWAYS_INLINE bool z_arm_preempted_thread_in_user_mode(const z_arch_esf_t *esf)
{
return z_arm_thread_is_in_user_mode();
}


@ -76,7 +76,7 @@ extern FUNC_NORETURN void z_arm_userspace_enter(k_thread_entry_t user_entry,
uint32_t stack_end,
uint32_t stack_start);
extern void z_arm_fatal_error(unsigned int reason, const struct arch_esf *esf);
extern void z_arm_fatal_error(unsigned int reason, const z_arch_esf_t *esf);
#endif /* _ASMLANGUAGE */


@ -42,7 +42,7 @@
extern "C" {
#endif
typedef struct arch_esf _esf_t;
typedef struct __esf _esf_t;
typedef struct __basic_sf _basic_sf_t;
#if defined(CONFIG_FPU) && defined(CONFIG_FPU_SHARING)
typedef struct __fpu_sf _fpu_sf_t;


@ -4,7 +4,6 @@ zephyr_library()
zephyr_library_sources(
cpu_idle.S
early_mem_funcs.S
fatal.c
irq_init.c
irq_manage.c
@ -44,7 +43,7 @@ if ((CONFIG_MP_MAX_NUM_CPUS GREATER 1) OR (CONFIG_SMP))
endif ()
zephyr_cc_option_ifdef(CONFIG_USERSPACE -mno-outline-atomics)
zephyr_cc_option_ifdef(CONFIG_FRAME_POINTER -mno-omit-leaf-frame-pointer)
zephyr_cc_option_ifdef(CONFIG_ARM64_ENABLE_FRAME_POINTER -mno-omit-leaf-frame-pointer)
# GCC may generate ldp/stp instructions with the Advanced SIMD Qn registers for
# consecutive 32-byte loads and stores. Saving and restoring the Advanced SIMD


@ -145,22 +145,13 @@ config ARM64_SAFE_EXCEPTION_STACK
config ARM64_ENABLE_FRAME_POINTER
bool
default y
depends on OVERRIDE_FRAME_POINTER_DEFAULT && !OMIT_FRAME_POINTER
depends on !FRAME_POINTER
select DEPRECATED
help
Deprecated. Use CONFIG_FRAME_POINTER instead.
Hidden option to simplify access to OVERRIDE_FRAME_POINTER_DEFAULT
and OMIT_FRAME_POINTER. It is automatically enabled when frame
pointer unwinding is enabled.
config ARM64_EXCEPTION_STACK_TRACE
bool
default y
depends on FRAME_POINTER
help
Internal config to enable runtime stack traces on fatal exceptions.
config ARM64_SAFE_EXCEPTION_STACK_SIZE
int "The stack size of the safe exception stack"
default 4096


@ -13,7 +13,7 @@
#define ARCH_HDR_VER 1
/* Structure to store the architecture registers passed to arch_coredump_info_dump
* As callee saved registers are not provided in struct arch_esf structure in Zephyr
* As callee saved registers are not provided in z_arch_esf_t structure in Zephyr
* we just need 22 registers.
*/
struct arm64_arch_block {
@ -50,7 +50,7 @@ struct arm64_arch_block {
*/
static struct arm64_arch_block arch_blk;
void arch_coredump_info_dump(const struct arch_esf *esf)
void arch_coredump_info_dump(const z_arch_esf_t *esf)
{
/* Target architecture information header */
/* Information just relevant to the python parser */
@ -69,7 +69,7 @@ void arch_coredump_info_dump(const struct arch_esf *esf)
/*
* Copies the thread registers to a memory block that will be printed out
* The thread registers are already provided by structure struct arch_esf
* The thread registers are already provided by structure z_arch_esf_t
*/
arch_blk.r.x0 = esf->x0;
arch_blk.r.x1 = esf->x1;


@ -1,83 +0,0 @@
/*
* Copyright (c) BayLibre SAS
*
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/toolchain.h>
#include <zephyr/linker/sections.h>
_ASM_FILE_PROLOGUE
/*
* These simple memset and memcpy alternatives are necessary as the optimized
* ones depend on the MMU to be active (see commit c5b898743a20).
*
* Furthermore, we can't implement those in C as the compiler is just too
* smart for its own good and replaces our simple loops into direct calls
* to memset or memcpy on its own.
*/
/* void z_early_memset(void *dst, int c, size_t n) */
GTEXT(z_early_memset)
SECTION_FUNC(TEXT, z_early_memset)
/* is dst pointer 8-bytes aligned? */
tst x0, #0x7
b.ne 2f
/* at least 8 bytes to set? */
cmp x2, #8
b.lo 2f
/* spread the byte value across whole 64 bits */
and x8, x1, #0xff
mov x9, #0x0101010101010101
mul x8, x8, x9
1: /* 8 bytes at a time */
sub x2, x2, #8
cmp x2, #7
str x8, [x0], #8
b.hi 1b
2: /* at least one byte to set? */
cbz x2, 4f
3: /* one byte at a time */
subs x2, x2, #1
strb w8, [x0], #1
b.ne 3b
4: ret
/* void z_early_memcpy(void *dst, const void *src, size_t n) */
GTEXT(z_early_memcpy)
SECTION_FUNC(TEXT, z_early_memcpy)
/* are dst and src pointers 8-bytes aligned? */
orr x8, x1, x0
tst x8, #0x7
b.ne 2f
/* at least 8 bytes to copy? */
cmp x2, #8
b.lo 2f
1: /* 8 bytes at a time */
ldr x8, [x1], #8
sub x2, x2, #8
cmp x2, #7
str x8, [x0], #8
b.hi 1b
2: /* at least one byte to copy? */
cbz x2, 4f
3: /* one byte at a time */
ldrb w8, [x1], #1
subs x2, x2, #1
strb w8, [x0], #1
b.ne 3b
4: ret


@ -181,7 +181,7 @@ static void dump_esr(uint64_t esr, bool *dump_far)
LOG_ERR(" ISS: 0x%llx", GET_ESR_ISS(esr));
}
static void esf_dump(const struct arch_esf *esf)
static void esf_dump(const z_arch_esf_t *esf)
{
LOG_ERR("x0: 0x%016llx x1: 0x%016llx", esf->x0, esf->x1);
LOG_ERR("x2: 0x%016llx x3: 0x%016llx", esf->x2, esf->x3);
@ -195,8 +195,8 @@ static void esf_dump(const struct arch_esf *esf)
LOG_ERR("x18: 0x%016llx lr: 0x%016llx", esf->x18, esf->lr);
}
#ifdef CONFIG_EXCEPTION_STACK_TRACE
static void esf_unwind(const struct arch_esf *esf)
#ifdef CONFIG_ARM64_ENABLE_FRAME_POINTER
static void esf_unwind(const z_arch_esf_t *esf)
{
/*
* For GCC:
@ -223,7 +223,7 @@ static void esf_unwind(const struct arch_esf *esf)
uint64_t lr;
LOG_ERR("");
for (int i = 0; (fp != NULL) && (i < CONFIG_EXCEPTION_STACK_TRACE_MAX_FRAMES); i++) {
while (fp != NULL) {
lr = fp[1];
#ifdef CONFIG_SYMTAB
uint32_t offset = 0;
@ -244,7 +244,7 @@ static void esf_unwind(const struct arch_esf *esf)
#endif /* CONFIG_EXCEPTION_DEBUG */
#ifdef CONFIG_ARM64_STACK_PROTECTION
static bool z_arm64_stack_corruption_check(struct arch_esf *esf, uint64_t esr, uint64_t far)
static bool z_arm64_stack_corruption_check(z_arch_esf_t *esf, uint64_t esr, uint64_t far)
{
uint64_t sp, sp_limit, guard_start;
/* 0x25 means data abort from current EL */
@ -284,7 +284,7 @@ static bool z_arm64_stack_corruption_check(struct arch_esf *esf, uint64_t esr, u
}
#endif
static bool is_recoverable(struct arch_esf *esf, uint64_t esr, uint64_t far,
static bool is_recoverable(z_arch_esf_t *esf, uint64_t esr, uint64_t far,
uint64_t elr)
{
if (!esf)
@ -306,7 +306,7 @@ static bool is_recoverable(struct arch_esf *esf, uint64_t esr, uint64_t far,
return false;
}
void z_arm64_fatal_error(unsigned int reason, struct arch_esf *esf)
void z_arm64_fatal_error(unsigned int reason, z_arch_esf_t *esf)
{
uint64_t esr = 0;
uint64_t elr = 0;
@ -363,9 +363,9 @@ void z_arm64_fatal_error(unsigned int reason, struct arch_esf *esf)
esf_dump(esf);
}
#ifdef CONFIG_EXCEPTION_STACK_TRACE
#ifdef CONFIG_ARM64_ENABLE_FRAME_POINTER
esf_unwind(esf);
#endif /* CONFIG_EXCEPTION_STACK_TRACE */
#endif /* CONFIG_ARM64_ENABLE_FRAME_POINTER */
#endif /* CONFIG_EXCEPTION_DEBUG */
z_fatal_error(reason, esf);
@ -379,7 +379,7 @@ void z_arm64_fatal_error(unsigned int reason, struct arch_esf *esf)
*
* @param esf exception frame
*/
void z_arm64_do_kernel_oops(struct arch_esf *esf)
void z_arm64_do_kernel_oops(z_arch_esf_t *esf)
{
/* x8 holds the exception reason */
unsigned int reason = esf->x8;


@ -159,7 +159,7 @@ void z_arm64_fpu_enter_exc(void)
* simulate them and leave the FPU access disabled. This also avoids the
* need for disabling interrupts in syscalls and IRQ handlers as well.
*/
static bool simulate_str_q_insn(struct arch_esf *esf)
static bool simulate_str_q_insn(z_arch_esf_t *esf)
{
/*
* Support only the "FP in exception" cases for now.
@ -221,7 +221,7 @@ static bool simulate_str_q_insn(struct arch_esf *esf)
* don't get interrupted that is. To ensure that we mask interrupts to
* the triggering exception context.
*/
void z_arm64_fpu_trap(struct arch_esf *esf)
void z_arm64_fpu_trap(z_arch_esf_t *esf)
{
__ASSERT(read_daif() & DAIF_IRQ_BIT, "must be called with IRQs disabled");


@ -18,7 +18,7 @@
#include <zephyr/sw_isr_table.h>
#include <zephyr/drivers/interrupt_controller/gic.h>
void z_arm64_fatal_error(unsigned int reason, struct arch_esf *esf);
void z_arm64_fatal_error(unsigned int reason, z_arch_esf_t *esf);
#if !defined(CONFIG_ARM_CUSTOM_INTERRUPT_CONTROLLER)
/*


@ -28,13 +28,9 @@ LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
static uint64_t xlat_tables[CONFIG_MAX_XLAT_TABLES * Ln_XLAT_NUM_ENTRIES]
__aligned(Ln_XLAT_NUM_ENTRIES * sizeof(uint64_t));
static int xlat_use_count[CONFIG_MAX_XLAT_TABLES];
static uint16_t xlat_use_count[CONFIG_MAX_XLAT_TABLES];
static struct k_spinlock xlat_lock;
/* Usage count value range */
#define XLAT_PTE_COUNT_MASK GENMASK(15, 0)
#define XLAT_REF_COUNT_UNIT BIT(16)
/* Returns a reference to a free table */
static uint64_t *new_table(void)
{
@ -43,9 +39,9 @@ static uint64_t *new_table(void)
/* Look for a free table. */
for (i = 0U; i < CONFIG_MAX_XLAT_TABLES; i++) {
if (xlat_use_count[i] == 0) {
if (xlat_use_count[i] == 0U) {
table = &xlat_tables[i * Ln_XLAT_NUM_ENTRIES];
xlat_use_count[i] = XLAT_REF_COUNT_UNIT;
xlat_use_count[i] = 1U;
MMU_DEBUG("allocating table [%d]%p\n", i, table);
return table;
}
@ -63,80 +59,31 @@ static inline unsigned int table_index(uint64_t *pte)
return i;
}
/* Makes a table free for reuse. */
static void free_table(uint64_t *table)
{
unsigned int i = table_index(table);
MMU_DEBUG("freeing table [%d]%p\n", i, table);
__ASSERT(xlat_use_count[i] == 1U, "table still in use");
xlat_use_count[i] = 0U;
}
/* Adjusts usage count and returns current count. */
static int table_usage(uint64_t *table, int adjustment)
{
unsigned int i = table_index(table);
int prev_count = xlat_use_count[i];
int new_count = prev_count + adjustment;
/* be reasonable: don't always create a debug flood */
if ((IS_ENABLED(DUMP_PTE) && adjustment != 0) || new_count == 0) {
MMU_DEBUG("table [%d]%p: usage %#x -> %#x\n", i, table, prev_count, new_count);
}
__ASSERT(new_count >= 0,
"table use count underflow");
__ASSERT(new_count == 0 || new_count >= XLAT_REF_COUNT_UNIT,
"table in use with no reference to it");
__ASSERT((new_count & XLAT_PTE_COUNT_MASK) <= Ln_XLAT_NUM_ENTRIES,
"table PTE count overflow");
xlat_use_count[i] = new_count;
return new_count;
}
static inline void inc_table_ref(uint64_t *table)
{
table_usage(table, XLAT_REF_COUNT_UNIT);
}
static inline void dec_table_ref(uint64_t *table)
{
int ref_unit = XLAT_REF_COUNT_UNIT;
table_usage(table, -ref_unit);
xlat_use_count[i] += adjustment;
__ASSERT(xlat_use_count[i] > 0, "usage count underflow");
return xlat_use_count[i];
}
static inline bool is_table_unused(uint64_t *table)
{
return (table_usage(table, 0) & XLAT_PTE_COUNT_MASK) == 0;
return table_usage(table, 0) == 1;
}
static inline bool is_table_single_referenced(uint64_t *table)
{
return table_usage(table, 0) < (2 * XLAT_REF_COUNT_UNIT);
}
#ifdef CONFIG_TEST
/* Hooks to let test code peek at table states */
int arm64_mmu_nb_free_tables(void)
{
int count = 0;
for (int i = 0; i < CONFIG_MAX_XLAT_TABLES; i++) {
if (xlat_use_count[i] == 0) {
count++;
}
}
return count;
}
int arm64_mmu_tables_total_usage(void)
{
int count = 0;
for (int i = 0; i < CONFIG_MAX_XLAT_TABLES; i++) {
count += xlat_use_count[i];
}
return count;
}
#endif /* CONFIG_TEST */
static inline bool is_free_desc(uint64_t desc)
{
return (desc & PTE_DESC_TYPE_MASK) == PTE_INVALID_DESC;
@ -155,15 +102,15 @@ static inline bool is_block_desc(uint64_t desc)
static inline uint64_t *pte_desc_table(uint64_t desc)
{
uint64_t address = desc & PTE_PHYSADDR_MASK;
uint64_t address = desc & GENMASK(47, PAGE_SIZE_SHIFT);
/* tables use a 1:1 physical:virtual mapping */
return (uint64_t *)address;
}
static inline bool is_desc_block_aligned(uint64_t desc, unsigned int level_size)
{
bool aligned = (desc & PTE_PHYSADDR_MASK & (level_size - 1)) == 0;
uint64_t mask = GENMASK(47, PAGE_SIZE_SHIFT);
bool aligned = !((desc & mask) & (level_size - 1));
if (!aligned) {
MMU_DEBUG("misaligned desc 0x%016llx for block size 0x%x\n",
@ -176,7 +123,7 @@ static inline bool is_desc_block_aligned(uint64_t desc, unsigned int level_size)
static inline bool is_desc_superset(uint64_t desc1, uint64_t desc2,
unsigned int level)
{
uint64_t mask = DESC_ATTRS_MASK | GENMASK64(47, LEVEL_TO_VA_SIZE_SHIFT(level));
uint64_t mask = DESC_ATTRS_MASK | GENMASK(47, LEVEL_TO_VA_SIZE_SHIFT(level));
return (desc1 & mask) == (desc2 & mask);
}
@ -192,8 +139,6 @@ static void debug_show_pte(uint64_t *pte, unsigned int level)
return;
}
MMU_DEBUG("0x%016llx ", *pte);
if (is_table_desc(*pte, level)) {
uint64_t *table = pte_desc_table(*pte);
@ -280,17 +225,20 @@ static uint64_t *expand_to_table(uint64_t *pte, unsigned int level)
/* Link the new table in place of the pte it replaces */
set_pte_table_desc(pte, table, level);
table_usage(table, 1);
return table;
}
static int set_mapping(uint64_t *top_table, uintptr_t virt, size_t size,
static int set_mapping(struct arm_mmu_ptables *ptables,
uintptr_t virt, size_t size,
uint64_t desc, bool may_overwrite)
{
uint64_t *table = top_table;
uint64_t *pte;
uint64_t *pte, *ptes[XLAT_LAST_LEVEL + 1];
uint64_t level_size;
uint64_t *table = ptables->base_xlat_table;
unsigned int level = BASE_XLAT_LEVEL;
int ret = 0;
while (size) {
__ASSERT(level <= XLAT_LAST_LEVEL,
@ -298,6 +246,7 @@ static int set_mapping(uint64_t *top_table, uintptr_t virt, size_t size,
/* Locate PTE for given virtual address and page table level */
pte = &table[XLAT_TABLE_VA_IDX(virt, level)];
ptes[level] = pte;
if (is_table_desc(*pte, level)) {
/* Move to the next translation table level */
@ -311,7 +260,8 @@ static int set_mapping(uint64_t *top_table, uintptr_t virt, size_t size,
LOG_ERR("entry already in use: "
"level %d pte %p *pte 0x%016llx",
level, pte, *pte);
return -EBUSY;
ret = -EBUSY;
break;
}
level_size = 1ULL << LEVEL_TO_VA_SIZE_SHIFT(level);
@ -330,7 +280,8 @@ static int set_mapping(uint64_t *top_table, uintptr_t virt, size_t size,
/* Range doesn't fit, create subtable */
table = expand_to_table(pte, level);
if (!table) {
return -ENOMEM;
ret = -ENOMEM;
break;
}
level++;
continue;
@ -340,58 +291,32 @@ static int set_mapping(uint64_t *top_table, uintptr_t virt, size_t size,
if (is_free_desc(*pte)) {
table_usage(pte, 1);
}
/* Create block/page descriptor */
if (!desc) {
table_usage(pte, -1);
}
/* Create (or erase) block/page descriptor */
set_pte_block_desc(pte, desc, level);
/* recursively free unused tables if any */
while (level != BASE_XLAT_LEVEL &&
is_table_unused(pte)) {
free_table(pte);
pte = ptes[--level];
set_pte_block_desc(pte, 0, level);
table_usage(pte, -1);
}
move_on:
virt += level_size;
desc += level_size;
desc += desc ? level_size : 0;
size -= level_size;
/* Range is mapped, start again for next range */
table = top_table;
table = ptables->base_xlat_table;
level = BASE_XLAT_LEVEL;
}
return 0;
}
static void del_mapping(uint64_t *table, uintptr_t virt, size_t size,
unsigned int level)
{
size_t step, level_size = 1ULL << LEVEL_TO_VA_SIZE_SHIFT(level);
uint64_t *pte, *subtable;
for ( ; size; virt += step, size -= step) {
step = level_size - (virt & (level_size - 1));
if (step > size) {
step = size;
}
pte = &table[XLAT_TABLE_VA_IDX(virt, level)];
if (is_free_desc(*pte)) {
continue;
}
if (is_table_desc(*pte, level)) {
subtable = pte_desc_table(*pte);
del_mapping(subtable, virt, step, level + 1);
if (!is_table_unused(subtable)) {
continue;
}
dec_table_ref(subtable);
} else {
/*
* We assume that block mappings will be unmapped
* as a whole and not partially.
*/
__ASSERT(step == level_size, "");
}
/* free this entry */
*pte = 0;
table_usage(pte, -1);
}
return ret;
}
#ifdef CONFIG_USERSPACE
@ -399,7 +324,7 @@ static void del_mapping(uint64_t *table, uintptr_t virt, size_t size,
static uint64_t *dup_table(uint64_t *src_table, unsigned int level)
{
uint64_t *dst_table = new_table();
int i, usage_count = 0;
int i;
if (!dst_table) {
return NULL;
@ -422,14 +347,13 @@ static uint64_t *dup_table(uint64_t *src_table, unsigned int level)
}
dst_table[i] = src_table[i];
if (is_table_desc(dst_table[i], level)) {
inc_table_ref(pte_desc_table(dst_table[i]));
if (is_table_desc(src_table[i], level)) {
table_usage(pte_desc_table(src_table[i]), 1);
}
if (!is_free_desc(dst_table[i])) {
usage_count++;
table_usage(dst_table, 1);
}
}
table_usage(dst_table, usage_count);
return dst_table;
}
@ -464,7 +388,8 @@ static int privatize_table(uint64_t *dst_table, uint64_t *src_table,
return -ENOMEM;
}
set_pte_table_desc(&dst_table[i], dst_subtable, level);
dec_table_ref(src_subtable);
table_usage(dst_subtable, 1);
table_usage(src_subtable, -1);
}
ret = privatize_table(dst_subtable, src_subtable,
@ -508,23 +433,18 @@ static int privatize_page_range(struct arm_mmu_ptables *dst_pt,
static void discard_table(uint64_t *table, unsigned int level)
{
unsigned int i;
int free_count = 0;
for (i = 0U; i < Ln_XLAT_NUM_ENTRIES; i++) {
if (is_table_desc(table[i], level)) {
uint64_t *subtable = pte_desc_table(table[i]);
if (is_table_single_referenced(subtable)) {
discard_table(subtable, level + 1);
}
dec_table_ref(subtable);
table_usage(pte_desc_table(table[i]), -1);
discard_table(pte_desc_table(table[i]), level + 1);
}
if (!is_free_desc(table[i])) {
table[i] = 0U;
free_count++;
table_usage(table, -1);
}
}
table_usage(table, -free_count);
free_table(table);
}
static int globalize_table(uint64_t *dst_table, uint64_t *src_table,
@ -546,20 +466,6 @@ static int globalize_table(uint64_t *dst_table, uint64_t *src_table,
continue;
}
if (is_free_desc(src_table[i]) &&
is_table_desc(dst_table[i], level)) {
uint64_t *subtable = pte_desc_table(dst_table[i]);
del_mapping(subtable, virt, step, level + 1);
if (is_table_unused(subtable)) {
/* unreference the empty table */
dst_table[i] = 0;
table_usage(dst_table, -1);
dec_table_ref(subtable);
}
continue;
}
if (step != level_size) {
/* boundary falls in the middle of this pte */
__ASSERT(is_table_desc(src_table[i], level),
@ -591,15 +497,15 @@ static int globalize_table(uint64_t *dst_table, uint64_t *src_table,
table_usage(dst_table, -1);
}
if (is_table_desc(src_table[i], level)) {
inc_table_ref(pte_desc_table(src_table[i]));
table_usage(pte_desc_table(src_table[i]), 1);
}
dst_table[i] = src_table[i];
debug_show_pte(&dst_table[i], level);
if (old_table) {
/* we can discard the whole branch */
table_usage(old_table, -1);
discard_table(old_table, level + 1);
dec_table_ref(old_table);
}
}
@ -719,7 +625,7 @@ static int __add_map(struct arm_mmu_ptables *ptables, const char *name,
__ASSERT(((virt | phys | size) & (CONFIG_MMU_PAGE_SIZE - 1)) == 0,
"address/size are not page aligned\n");
desc |= phys;
return set_mapping(ptables->base_xlat_table, virt, size, desc, may_overwrite);
return set_mapping(ptables, virt, size, desc, may_overwrite);
}
static int add_map(struct arm_mmu_ptables *ptables, const char *name,
@ -734,18 +640,20 @@ static int add_map(struct arm_mmu_ptables *ptables, const char *name,
return ret;
}
static void remove_map(struct arm_mmu_ptables *ptables, const char *name,
uintptr_t virt, size_t size)
static int remove_map(struct arm_mmu_ptables *ptables, const char *name,
uintptr_t virt, size_t size)
{
k_spinlock_key_t key;
int ret;
MMU_DEBUG("unmap [%s]: virt %lx size %lx\n", name, virt, size);
__ASSERT(((virt | size) & (CONFIG_MMU_PAGE_SIZE - 1)) == 0,
"address/size are not page aligned\n");
key = k_spin_lock(&xlat_lock);
del_mapping(ptables->base_xlat_table, virt, size, BASE_XLAT_LEVEL);
ret = set_mapping(ptables, virt, size, 0, true);
k_spin_unlock(&xlat_lock, key);
return ret;
}
static void invalidate_tlb_all(void)
@ -984,7 +892,7 @@ void z_arm64_mm_init(bool is_primary_core)
enable_mmu_el1(&kernel_ptables, flags);
}
static void sync_domains(uintptr_t virt, size_t size, const char *name)
static void sync_domains(uintptr_t virt, size_t size)
{
#ifdef CONFIG_USERSPACE
sys_snode_t *node;
@ -998,7 +906,7 @@ static void sync_domains(uintptr_t virt, size_t size, const char *name)
domain = CONTAINER_OF(node, struct arch_mem_domain, node);
domain_ptables = &domain->ptables;
ret = globalize_page_range(domain_ptables, &kernel_ptables,
virt, size, name);
virt, size, "generic");
if (ret) {
LOG_ERR("globalize_page_range() returned %d", ret);
}
@ -1080,7 +988,7 @@ void arch_mem_map(void *virt, uintptr_t phys, size_t size, uint32_t flags)
} else {
uint32_t mem_flags = flags & K_MEM_CACHE_MASK;
sync_domains((uintptr_t)virt, size, "mem_map");
sync_domains((uintptr_t)virt, size);
invalidate_tlb_all();
switch (mem_flags) {
@ -1097,9 +1005,14 @@ void arch_mem_map(void *virt, uintptr_t phys, size_t size, uint32_t flags)
void arch_mem_unmap(void *addr, size_t size)
{
remove_map(&kernel_ptables, "generic", (uintptr_t)addr, size);
sync_domains((uintptr_t)addr, size, "mem_unmap");
invalidate_tlb_all();
int ret = remove_map(&kernel_ptables, "generic", (uintptr_t)addr, size);
if (ret) {
LOG_ERR("remove_map() returned %d", ret);
} else {
sync_domains((uintptr_t)addr, size);
invalidate_tlb_all();
}
}
int arch_page_phys_get(void *virt, uintptr_t *phys)
@ -1118,7 +1031,7 @@ int arch_page_phys_get(void *virt, uintptr_t *phys)
}
if (phys) {
*phys = par & GENMASK64(47, 12);
*phys = par & GENMASK(47, 12);
}
return 0;
}
@ -1317,7 +1230,6 @@ static void z_arm64_swap_ptables(struct k_thread *incoming)
return; /* Already the right tables */
}
MMU_DEBUG("TTBR0 switch from %#llx to %#llx\n", curr_ttbr0, new_ttbr0);
z_arm64_set_ttbr0(new_ttbr0);
if (get_asid(curr_ttbr0) == get_asid(new_ttbr0)) {


@ -93,80 +93,3 @@
#define DESC_ATTRS_LOWER_MASK GENMASK(11, 2)
#define DESC_ATTRS_MASK (DESC_ATTRS_UPPER_MASK | DESC_ATTRS_LOWER_MASK)
/*
* PTE descriptor can be Block descriptor or Table descriptor
* or Page descriptor.
*/
#define PTE_DESC_TYPE_MASK 3ULL
#define PTE_BLOCK_DESC 1ULL
#define PTE_TABLE_DESC 3ULL
#define PTE_PAGE_DESC 3ULL
#define PTE_INVALID_DESC 0ULL
/*
* Block and Page descriptor attributes fields
*/
#define PTE_BLOCK_DESC_MEMTYPE(x) (x << 2)
#define PTE_BLOCK_DESC_NS (1ULL << 5)
#define PTE_BLOCK_DESC_AP_ELx (1ULL << 6)
#define PTE_BLOCK_DESC_AP_EL_HIGHER (0ULL << 6)
#define PTE_BLOCK_DESC_AP_RO (1ULL << 7)
#define PTE_BLOCK_DESC_AP_RW (0ULL << 7)
#define PTE_BLOCK_DESC_NON_SHARE (0ULL << 8)
#define PTE_BLOCK_DESC_OUTER_SHARE (2ULL << 8)
#define PTE_BLOCK_DESC_INNER_SHARE (3ULL << 8)
#define PTE_BLOCK_DESC_AF (1ULL << 10)
#define PTE_BLOCK_DESC_NG (1ULL << 11)
#define PTE_BLOCK_DESC_PXN (1ULL << 53)
#define PTE_BLOCK_DESC_UXN (1ULL << 54)
/*
* Descriptor physical address field bits
*/
#define PTE_PHYSADDR_MASK GENMASK64(47, PAGE_SIZE_SHIFT)
/*
* TCR definitions.
*/
#define TCR_EL1_IPS_SHIFT 32U
#define TCR_EL2_PS_SHIFT 16U
#define TCR_EL3_PS_SHIFT 16U
#define TCR_T0SZ_SHIFT 0U
#define TCR_T0SZ(x) ((64 - (x)) << TCR_T0SZ_SHIFT)
#define TCR_IRGN_NC (0ULL << 8)
#define TCR_IRGN_WBWA (1ULL << 8)
#define TCR_IRGN_WT (2ULL << 8)
#define TCR_IRGN_WBNWA (3ULL << 8)
#define TCR_IRGN_MASK (3ULL << 8)
#define TCR_ORGN_NC (0ULL << 10)
#define TCR_ORGN_WBWA (1ULL << 10)
#define TCR_ORGN_WT (2ULL << 10)
#define TCR_ORGN_WBNWA (3ULL << 10)
#define TCR_ORGN_MASK (3ULL << 10)
#define TCR_SHARED_NON (0ULL << 12)
#define TCR_SHARED_OUTER (2ULL << 12)
#define TCR_SHARED_INNER (3ULL << 12)
#define TCR_TG0_4K (0ULL << 14)
#define TCR_TG0_64K (1ULL << 14)
#define TCR_TG0_16K (2ULL << 14)
#define TCR_EPD1_DISABLE (1ULL << 23)
#define TCR_TG1_16K (1ULL << 30)
#define TCR_TG1_4K (2ULL << 30)
#define TCR_TG1_64K (3ULL << 30)
#define TCR_PS_BITS_4GB 0x0ULL
#define TCR_PS_BITS_64GB 0x1ULL
#define TCR_PS_BITS_1TB 0x2ULL
#define TCR_PS_BITS_4TB 0x3ULL
#define TCR_PS_BITS_16TB 0x4ULL
#define TCR_PS_BITS_256TB 0x5ULL
/*
* ARM guarantees at least 8 ASID bits.
* We may have more available, but do not make use of them for the time being.
*/
#define VM_ASID_BITS 8
#define TTBR_ASID_SHIFT 48


@ -40,7 +40,7 @@ GEN_NAMED_OFFSET_SYM(_callee_saved_t, x27, x27_x28);
GEN_NAMED_OFFSET_SYM(_callee_saved_t, x29, x29_sp_el0);
GEN_NAMED_OFFSET_SYM(_callee_saved_t, sp_elx, sp_elx_lr);
#ifdef CONFIG_FRAME_POINTER
#ifdef CONFIG_ARM64_ENABLE_FRAME_POINTER
GEN_NAMED_OFFSET_SYM(_esf_t, fp, fp);
#endif


@ -21,6 +21,29 @@ extern void z_arm64_mm_init(bool is_primary_core);
__weak void z_arm64_mm_init(bool is_primary_core) { }
/*
* These simple memset/memcpy alternatives are necessary as the optimized
* ones depend on the MMU to be active (see commit c5b898743a20).
*/
void z_early_memset(void *dst, int c, size_t n)
{
uint8_t *d = dst;
while (n--) {
*d++ = c;
}
}
void z_early_memcpy(void *dst, const void *src, size_t n)
{
uint8_t *d = dst;
const uint8_t *s = src;
while (n--) {
*d++ = *s++;
}
}
/**
*
* @brief Prepare to and run C code


@ -16,7 +16,6 @@
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <ksched.h>
#include <ipi.h>
#include <zephyr/init.h>
#include <zephyr/arch/arm64/mm.h>
#include <zephyr/arch/cpu.h>
@ -181,7 +180,7 @@ void arch_secondary_cpu_init(int cpu_num)
#ifdef CONFIG_SMP
static void send_ipi(unsigned int ipi, uint32_t cpu_bitmap)
static void broadcast_ipi(unsigned int ipi)
{
uint64_t mpidr = MPIDR_TO_CORE(GET_MPIDR());
@ -191,10 +190,6 @@ static void send_ipi(unsigned int ipi, uint32_t cpu_bitmap)
unsigned int num_cpus = arch_num_cpus();
for (int i = 0; i < num_cpus; i++) {
if ((cpu_bitmap & BIT(i)) == 0) {
continue;
}
uint64_t target_mpidr = cpu_map[i];
uint8_t aff0;
@ -214,14 +209,10 @@ void sched_ipi_handler(const void *unused)
z_sched_ipi();
}
void arch_sched_broadcast_ipi(void)
/* arch implementation of sched_ipi */
void arch_sched_ipi(void)
{
send_ipi(SGI_SCHED_IPI, IPI_ALL_CPUS_MASK);
}
void arch_sched_directed_ipi(uint32_t cpu_bitmap)
{
send_ipi(SGI_SCHED_IPI, cpu_bitmap);
broadcast_ipi(SGI_SCHED_IPI);
}
#ifdef CONFIG_USERSPACE
@@ -241,7 +232,7 @@ void mem_cfg_ipi_handler(const void *unused)
void z_arm64_mem_cfg_ipi(void)
{
send_ipi(SGI_MMCFG_IPI, IPI_ALL_CPUS_MASK);
broadcast_ipi(SGI_MMCFG_IPI);
}
#endif
@@ -311,5 +302,6 @@ int arch_smp_init(void)
return 0;
}
SYS_INIT(arch_smp_init, PRE_KERNEL_2, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif

@@ -87,7 +87,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
void *p1, void *p2, void *p3)
{
extern void z_arm64_exit_exc(void);
struct arch_esf *pInitCtx;
z_arch_esf_t *pInitCtx;
/*
* Clean the thread->arch to avoid unexpected behavior because the
@@ -102,7 +102,7 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
* dropping into EL0.
*/
pInitCtx = Z_STACK_PTR_TO_FRAME(struct arch_esf, stack_ptr);
pInitCtx = Z_STACK_PTR_TO_FRAME(struct __esf, stack_ptr);
pInitCtx->x0 = (uint64_t)entry;
pInitCtx->x1 = (uint64_t)p1;

@@ -72,7 +72,7 @@ _ASM_FILE_PROLOGUE
.endif
#endif
#ifdef CONFIG_FRAME_POINTER
#ifdef CONFIG_ARM64_ENABLE_FRAME_POINTER
str x29, [sp, ___esf_t_fp_OFFSET]
#endif
@@ -339,7 +339,7 @@ SECTION_FUNC(TEXT, z_arm64_exit_exc)
ldp x16, x17, [sp, ___esf_t_x16_x17_OFFSET]
ldp x18, lr, [sp, ___esf_t_x18_lr_OFFSET]
#ifdef CONFIG_FRAME_POINTER
#ifdef CONFIG_ARM64_ENABLE_FRAME_POINTER
ldr x29, [sp, ___esf_t_fp_OFFSET]
#endif

@@ -36,7 +36,7 @@
extern "C" {
#endif
typedef struct arch_esf _esf_t;
typedef struct __esf _esf_t;
typedef struct __basic_sf _basic_sf_t;
#ifdef __cplusplus

@@ -43,7 +43,7 @@ static inline void arch_switch(void *switch_to, void **switched_from)
z_arm64_context_switch(new, old);
}
extern void z_arm64_fatal_error(unsigned int reason, struct arch_esf *esf);
extern void z_arm64_fatal_error(unsigned int reason, z_arch_esf_t *esf);
extern void z_arm64_set_ttbr0(uint64_t ttbr0);
extern void z_arm64_mem_cfg_ipi(void);

@@ -15,11 +15,11 @@
struct int_list_header {
uint32_t table_size;
uint32_t offset;
#if defined(CONFIG_ISR_TABLES_LOCAL_DECLARATION)
#if IS_ENABLED(CONFIG_ISR_TABLES_LOCAL_DECLARATION)
uint32_t swi_table_entry_size;
uint32_t shared_isr_table_entry_size;
uint32_t shared_isr_client_num_offset;
#endif /* defined(CONFIG_ISR_TABLES_LOCAL_DECLARATION) */
#endif /* IS_ENABLED(CONFIG_ISR_TABLES_LOCAL_DECLARATION) */
};
/* These values are not included in the resulting binary, but instead form the
@@ -29,13 +29,13 @@ struct int_list_header {
Z_GENERIC_SECTION(.irq_info) __used struct int_list_header _iheader = {
.table_size = IRQ_TABLE_SIZE,
.offset = CONFIG_GEN_IRQ_START_VECTOR,
#if defined(CONFIG_ISR_TABLES_LOCAL_DECLARATION)
#if IS_ENABLED(CONFIG_ISR_TABLES_LOCAL_DECLARATION)
.swi_table_entry_size = sizeof(struct _isr_table_entry),
#if defined(CONFIG_SHARED_INTERRUPTS)
#if IS_ENABLED(CONFIG_SHARED_INTERRUPTS)
.shared_isr_table_entry_size = sizeof(struct z_shared_isr_table_entry),
.shared_isr_client_num_offset = offsetof(struct z_shared_isr_table_entry, client_num),
#endif /* defined(CONFIG_SHARED_INTERRUPTS) */
#endif /* defined(CONFIG_ISR_TABLES_LOCAL_DECLARATION) */
#endif /* IS_ENABLED(CONFIG_SHARED_INTERRUPTS) */
#endif /* IS_ENABLED(CONFIG_ISR_TABLES_LOCAL_DECLARATION) */
};
/* These are placeholder tables. They will be replaced by the real tables
@@ -90,7 +90,7 @@ uintptr_t __irq_vector_table _irq_vector_table[IRQ_TABLE_SIZE] = {
#ifdef CONFIG_GEN_SW_ISR_TABLE
struct _isr_table_entry __sw_isr_table _sw_isr_table[IRQ_TABLE_SIZE] = {
[0 ...(IRQ_TABLE_SIZE - 1)] = {(const void *)0x42,
&z_irq_spurious},
(void *)&z_irq_spurious},
};
#endif

@@ -9,7 +9,7 @@
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
FUNC_NORETURN void z_mips_fatal_error(unsigned int reason,
const struct arch_esf *esf)
const z_arch_esf_t *esf)
{
#ifdef CONFIG_EXCEPTION_DEBUG
if (esf != NULL) {
@@ -84,7 +84,7 @@ static char *cause_str(unsigned long cause)
}
}
void _Fault(struct arch_esf *esf)
void _Fault(z_arch_esf_t *esf)
{
unsigned long cause;

@@ -14,7 +14,7 @@
#include <mips/regdef.h>
#include <mips/mipsregs.h>
#define ESF_O(FIELD) __struct_arch_esf_##FIELD##_OFFSET
#define ESF_O(FIELD) __z_arch_esf_t_##FIELD##_OFFSET
#define THREAD_O(FIELD) _thread_offset_to_##FIELD
/* Convenience macros for loading/storing register states. */
@@ -58,12 +58,12 @@
op v1, ESF_O(v1)(sp) ;
#define STORE_CALLER_SAVED() \
addi sp, sp, -__struct_arch_esf_SIZEOF ;\
addi sp, sp, -__z_arch_esf_t_SIZEOF ;\
DO_CALLER_SAVED(OP_STOREREG) ;
#define LOAD_CALLER_SAVED() \
DO_CALLER_SAVED(OP_LOADREG) ;\
addi sp, sp, __struct_arch_esf_SIZEOF ;
addi sp, sp, __z_arch_esf_t_SIZEOF ;
/* imports */
GTEXT(_Fault)

@@ -23,32 +23,32 @@ GEN_OFFSET_SYM(_callee_saved_t, s6);
GEN_OFFSET_SYM(_callee_saved_t, s7);
GEN_OFFSET_SYM(_callee_saved_t, s8);
GEN_OFFSET_STRUCT(arch_esf, ra);
GEN_OFFSET_STRUCT(arch_esf, gp);
GEN_OFFSET_STRUCT(arch_esf, t0);
GEN_OFFSET_STRUCT(arch_esf, t1);
GEN_OFFSET_STRUCT(arch_esf, t2);
GEN_OFFSET_STRUCT(arch_esf, t3);
GEN_OFFSET_STRUCT(arch_esf, t4);
GEN_OFFSET_STRUCT(arch_esf, t5);
GEN_OFFSET_STRUCT(arch_esf, t6);
GEN_OFFSET_STRUCT(arch_esf, t7);
GEN_OFFSET_STRUCT(arch_esf, t8);
GEN_OFFSET_STRUCT(arch_esf, t9);
GEN_OFFSET_STRUCT(arch_esf, a0);
GEN_OFFSET_STRUCT(arch_esf, a1);
GEN_OFFSET_STRUCT(arch_esf, a2);
GEN_OFFSET_STRUCT(arch_esf, a3);
GEN_OFFSET_STRUCT(arch_esf, v0);
GEN_OFFSET_STRUCT(arch_esf, v1);
GEN_OFFSET_STRUCT(arch_esf, at);
GEN_OFFSET_STRUCT(arch_esf, epc);
GEN_OFFSET_STRUCT(arch_esf, badvaddr);
GEN_OFFSET_STRUCT(arch_esf, hi);
GEN_OFFSET_STRUCT(arch_esf, lo);
GEN_OFFSET_STRUCT(arch_esf, status);
GEN_OFFSET_STRUCT(arch_esf, cause);
GEN_OFFSET_SYM(z_arch_esf_t, ra);
GEN_OFFSET_SYM(z_arch_esf_t, gp);
GEN_OFFSET_SYM(z_arch_esf_t, t0);
GEN_OFFSET_SYM(z_arch_esf_t, t1);
GEN_OFFSET_SYM(z_arch_esf_t, t2);
GEN_OFFSET_SYM(z_arch_esf_t, t3);
GEN_OFFSET_SYM(z_arch_esf_t, t4);
GEN_OFFSET_SYM(z_arch_esf_t, t5);
GEN_OFFSET_SYM(z_arch_esf_t, t6);
GEN_OFFSET_SYM(z_arch_esf_t, t7);
GEN_OFFSET_SYM(z_arch_esf_t, t8);
GEN_OFFSET_SYM(z_arch_esf_t, t9);
GEN_OFFSET_SYM(z_arch_esf_t, a0);
GEN_OFFSET_SYM(z_arch_esf_t, a1);
GEN_OFFSET_SYM(z_arch_esf_t, a2);
GEN_OFFSET_SYM(z_arch_esf_t, a3);
GEN_OFFSET_SYM(z_arch_esf_t, v0);
GEN_OFFSET_SYM(z_arch_esf_t, v1);
GEN_OFFSET_SYM(z_arch_esf_t, at);
GEN_OFFSET_SYM(z_arch_esf_t, epc);
GEN_OFFSET_SYM(z_arch_esf_t, badvaddr);
GEN_OFFSET_SYM(z_arch_esf_t, hi);
GEN_OFFSET_SYM(z_arch_esf_t, lo);
GEN_OFFSET_SYM(z_arch_esf_t, status);
GEN_OFFSET_SYM(z_arch_esf_t, cause);
GEN_ABSOLUTE_SYM(__struct_arch_esf_SIZEOF, STACK_ROUND_UP(sizeof(struct arch_esf)));
GEN_ABSOLUTE_SYM(__z_arch_esf_t_SIZEOF, STACK_ROUND_UP(sizeof(z_arch_esf_t)));
GEN_ABS_SYM_END

@@ -19,11 +19,11 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
char *stack_ptr, k_thread_entry_t entry,
void *p1, void *p2, void *p3)
{
struct arch_esf *stack_init;
struct __esf *stack_init;
/* Initial stack frame for thread */
stack_init = (struct arch_esf *)Z_STACK_PTR_ALIGN(
Z_STACK_PTR_TO_FRAME(struct arch_esf, stack_ptr)
stack_init = (struct __esf *)Z_STACK_PTR_ALIGN(
Z_STACK_PTR_TO_FRAME(struct __esf, stack_ptr)
);
/* Setup the initial stack frame */

@@ -35,7 +35,7 @@ arch_thread_return_value_set(struct k_thread *thread, unsigned int value)
}
FUNC_NORETURN void z_mips_fatal_error(unsigned int reason,
const struct arch_esf *esf);
const z_arch_esf_t *esf);
static inline bool arch_is_in_isr(void)
{

@@ -35,35 +35,35 @@ GTEXT(_offload_routine)
*/
SECTION_FUNC(exception.entry, _exception)
/* Reserve thread stack space for saving context */
subi sp, sp, __struct_arch_esf_SIZEOF
subi sp, sp, __z_arch_esf_t_SIZEOF
/* Preserve all caller-saved registers onto the thread's stack */
stw ra, __struct_arch_esf_ra_OFFSET(sp)
stw r1, __struct_arch_esf_r1_OFFSET(sp)
stw r2, __struct_arch_esf_r2_OFFSET(sp)
stw r3, __struct_arch_esf_r3_OFFSET(sp)
stw r4, __struct_arch_esf_r4_OFFSET(sp)
stw r5, __struct_arch_esf_r5_OFFSET(sp)
stw r6, __struct_arch_esf_r6_OFFSET(sp)
stw r7, __struct_arch_esf_r7_OFFSET(sp)
stw r8, __struct_arch_esf_r8_OFFSET(sp)
stw r9, __struct_arch_esf_r9_OFFSET(sp)
stw r10, __struct_arch_esf_r10_OFFSET(sp)
stw r11, __struct_arch_esf_r11_OFFSET(sp)
stw r12, __struct_arch_esf_r12_OFFSET(sp)
stw r13, __struct_arch_esf_r13_OFFSET(sp)
stw r14, __struct_arch_esf_r14_OFFSET(sp)
stw r15, __struct_arch_esf_r15_OFFSET(sp)
stw ra, __z_arch_esf_t_ra_OFFSET(sp)
stw r1, __z_arch_esf_t_r1_OFFSET(sp)
stw r2, __z_arch_esf_t_r2_OFFSET(sp)
stw r3, __z_arch_esf_t_r3_OFFSET(sp)
stw r4, __z_arch_esf_t_r4_OFFSET(sp)
stw r5, __z_arch_esf_t_r5_OFFSET(sp)
stw r6, __z_arch_esf_t_r6_OFFSET(sp)
stw r7, __z_arch_esf_t_r7_OFFSET(sp)
stw r8, __z_arch_esf_t_r8_OFFSET(sp)
stw r9, __z_arch_esf_t_r9_OFFSET(sp)
stw r10, __z_arch_esf_t_r10_OFFSET(sp)
stw r11, __z_arch_esf_t_r11_OFFSET(sp)
stw r12, __z_arch_esf_t_r12_OFFSET(sp)
stw r13, __z_arch_esf_t_r13_OFFSET(sp)
stw r14, __z_arch_esf_t_r14_OFFSET(sp)
stw r15, __z_arch_esf_t_r15_OFFSET(sp)
/* Store value of estatus control register */
rdctl et, estatus
stw et, __struct_arch_esf_estatus_OFFSET(sp)
stw et, __z_arch_esf_t_estatus_OFFSET(sp)
/* ea-4 is the address of the instruction when the exception happened,
* put this in the stack frame as well
*/
addi r15, ea, -4
stw r15, __struct_arch_esf_instr_OFFSET(sp)
stw r15, __z_arch_esf_t_instr_OFFSET(sp)
/* Figure out whether we are here because of an interrupt or an
* exception. If an interrupt, switch stacks and enter IRQ handling
@@ -157,7 +157,7 @@ not_interrupt:
*
* We earlier put ea - 4 in the stack frame, replace it with just ea
*/
stw ea, __struct_arch_esf_instr_OFFSET(sp)
stw ea, __z_arch_esf_t_instr_OFFSET(sp)
#ifdef CONFIG_IRQ_OFFLOAD
/* Check the contents of _offload_routine. If non-NULL, jump into
@@ -193,35 +193,35 @@ _exception_exit:
* and return to the interrupted context */
/* Return address from the exception */
ldw ea, __struct_arch_esf_instr_OFFSET(sp)
ldw ea, __z_arch_esf_t_instr_OFFSET(sp)
/* Restore estatus
* XXX is this right??? */
ldw r5, __struct_arch_esf_estatus_OFFSET(sp)
ldw r5, __z_arch_esf_t_estatus_OFFSET(sp)
wrctl estatus, r5
/* Restore caller-saved registers */
ldw ra, __struct_arch_esf_ra_OFFSET(sp)
ldw r1, __struct_arch_esf_r1_OFFSET(sp)
ldw r2, __struct_arch_esf_r2_OFFSET(sp)
ldw r3, __struct_arch_esf_r3_OFFSET(sp)
ldw r4, __struct_arch_esf_r4_OFFSET(sp)
ldw r5, __struct_arch_esf_r5_OFFSET(sp)
ldw r6, __struct_arch_esf_r6_OFFSET(sp)
ldw r7, __struct_arch_esf_r7_OFFSET(sp)
ldw r8, __struct_arch_esf_r8_OFFSET(sp)
ldw r9, __struct_arch_esf_r9_OFFSET(sp)
ldw r10, __struct_arch_esf_r10_OFFSET(sp)
ldw r11, __struct_arch_esf_r11_OFFSET(sp)
ldw r12, __struct_arch_esf_r12_OFFSET(sp)
ldw r13, __struct_arch_esf_r13_OFFSET(sp)
ldw r14, __struct_arch_esf_r14_OFFSET(sp)
ldw r15, __struct_arch_esf_r15_OFFSET(sp)
ldw ra, __z_arch_esf_t_ra_OFFSET(sp)
ldw r1, __z_arch_esf_t_r1_OFFSET(sp)
ldw r2, __z_arch_esf_t_r2_OFFSET(sp)
ldw r3, __z_arch_esf_t_r3_OFFSET(sp)
ldw r4, __z_arch_esf_t_r4_OFFSET(sp)
ldw r5, __z_arch_esf_t_r5_OFFSET(sp)
ldw r6, __z_arch_esf_t_r6_OFFSET(sp)
ldw r7, __z_arch_esf_t_r7_OFFSET(sp)
ldw r8, __z_arch_esf_t_r8_OFFSET(sp)
ldw r9, __z_arch_esf_t_r9_OFFSET(sp)
ldw r10, __z_arch_esf_t_r10_OFFSET(sp)
ldw r11, __z_arch_esf_t_r11_OFFSET(sp)
ldw r12, __z_arch_esf_t_r12_OFFSET(sp)
ldw r13, __z_arch_esf_t_r13_OFFSET(sp)
ldw r14, __z_arch_esf_t_r14_OFFSET(sp)
ldw r15, __z_arch_esf_t_r15_OFFSET(sp)
/* Put the stack pointer back where it was when we entered
* exception state
*/
addi sp, sp, __struct_arch_esf_SIZEOF
addi sp, sp, __z_arch_esf_t_SIZEOF
/* All done, copy estatus into status and transfer to ea */
eret

@@ -12,7 +12,7 @@
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
FUNC_NORETURN void z_nios2_fatal_error(unsigned int reason,
const struct arch_esf *esf)
const z_arch_esf_t *esf)
{
#if CONFIG_EXCEPTION_DEBUG
if (esf != NULL) {
@@ -102,7 +102,7 @@ static char *cause_str(uint32_t cause_code)
}
#endif
FUNC_NORETURN void _Fault(const struct arch_esf *esf)
FUNC_NORETURN void _Fault(const z_arch_esf_t *esf)
{
#if defined(CONFIG_PRINTK) || defined(CONFIG_LOG)
/* Unfortunately, completely unavailable on Nios II/e cores */

@@ -44,24 +44,24 @@ GEN_OFFSET_SYM(_callee_saved_t, sp);
GEN_OFFSET_SYM(_callee_saved_t, key);
GEN_OFFSET_SYM(_callee_saved_t, retval);
GEN_OFFSET_STRUCT(arch_esf, ra);
GEN_OFFSET_STRUCT(arch_esf, r1);
GEN_OFFSET_STRUCT(arch_esf, r2);
GEN_OFFSET_STRUCT(arch_esf, r3);
GEN_OFFSET_STRUCT(arch_esf, r4);
GEN_OFFSET_STRUCT(arch_esf, r5);
GEN_OFFSET_STRUCT(arch_esf, r6);
GEN_OFFSET_STRUCT(arch_esf, r7);
GEN_OFFSET_STRUCT(arch_esf, r8);
GEN_OFFSET_STRUCT(arch_esf, r9);
GEN_OFFSET_STRUCT(arch_esf, r10);
GEN_OFFSET_STRUCT(arch_esf, r11);
GEN_OFFSET_STRUCT(arch_esf, r12);
GEN_OFFSET_STRUCT(arch_esf, r13);
GEN_OFFSET_STRUCT(arch_esf, r14);
GEN_OFFSET_STRUCT(arch_esf, r15);
GEN_OFFSET_STRUCT(arch_esf, estatus);
GEN_OFFSET_STRUCT(arch_esf, instr);
GEN_ABSOLUTE_SYM(__struct_arch_esf_SIZEOF, sizeof(struct arch_esf));
GEN_OFFSET_SYM(z_arch_esf_t, ra);
GEN_OFFSET_SYM(z_arch_esf_t, r1);
GEN_OFFSET_SYM(z_arch_esf_t, r2);
GEN_OFFSET_SYM(z_arch_esf_t, r3);
GEN_OFFSET_SYM(z_arch_esf_t, r4);
GEN_OFFSET_SYM(z_arch_esf_t, r5);
GEN_OFFSET_SYM(z_arch_esf_t, r6);
GEN_OFFSET_SYM(z_arch_esf_t, r7);
GEN_OFFSET_SYM(z_arch_esf_t, r8);
GEN_OFFSET_SYM(z_arch_esf_t, r9);
GEN_OFFSET_SYM(z_arch_esf_t, r10);
GEN_OFFSET_SYM(z_arch_esf_t, r11);
GEN_OFFSET_SYM(z_arch_esf_t, r12);
GEN_OFFSET_SYM(z_arch_esf_t, r13);
GEN_OFFSET_SYM(z_arch_esf_t, r14);
GEN_OFFSET_SYM(z_arch_esf_t, r15);
GEN_OFFSET_SYM(z_arch_esf_t, estatus);
GEN_OFFSET_SYM(z_arch_esf_t, instr);
GEN_ABSOLUTE_SYM(__z_arch_esf_t_SIZEOF, sizeof(z_arch_esf_t));
GEN_ABS_SYM_END

@@ -39,7 +39,7 @@ arch_thread_return_value_set(struct k_thread *thread, unsigned int value)
}
FUNC_NORETURN void z_nios2_fatal_error(unsigned int reason,
const struct arch_esf *esf);
const z_arch_esf_t *esf);
static inline bool arch_is_in_isr(void)
{

@@ -30,6 +30,7 @@ config RISCV_GP
config RISCV_ALWAYS_SWITCH_THROUGH_ECALL
bool "Do not use mret outside a trap handler context"
depends on MULTITHREADING
depends on !RISCV_PMP
help
Use mret instruction only when in a trap handler.
This is for RISC-V implementations that require every mret to be
@@ -37,9 +38,19 @@ config RISCV_ALWAYS_SWITCH_THROUGH_ECALL
and most people should say n here to minimize context switching
overhead.
config RISCV_ENABLE_FRAME_POINTER
bool
default y
depends on OVERRIDE_FRAME_POINTER_DEFAULT && !OMIT_FRAME_POINTER
help
Hidden option to simplify access to OVERRIDE_FRAME_POINTER_DEFAULT
and OMIT_FRAME_POINTER. It is automatically enabled when the frame
pointer unwinding is enabled.
config RISCV_EXCEPTION_STACK_TRACE
bool
default y
depends on EXCEPTION_STACK_TRACE
imply THREAD_STACK_INFO
help
Internal config to enable runtime stack traces on fatal exceptions.
@@ -47,11 +58,10 @@ config RISCV_EXCEPTION_STACK_TRACE
menu "RISCV Processor Options"
config INCLUDE_RESET_VECTOR
bool "Jumps to __initialize directly"
bool "Include Reset vector"
help
Select 'y' here to use the Zephyr provided default implementation that
jumps to `__initialize` directly. Otherwise a SOC needs to provide its
custom `__reset` routine.
Include the reset vector stub, which initializes the stack and
prepares for running C code.
config RISCV_PRIVILEGED
bool
@@ -88,7 +98,7 @@ config RISCV_SOC_HAS_ISR_STACKING
guarded by !_ASMLANGUAGE. The ESF should be defined to account for
the hardware stacked registers in the proper order as they are
saved on the stack by the hardware, and the registers saved by the
software macros. The structure must be called 'struct arch_esf'.
software macros. The structure must be called '__esf'.
config RISCV_SOC_HAS_CUSTOM_IRQ_HANDLING
bool
@@ -369,7 +379,6 @@ config ARCH_IRQ_VECTOR_TABLE_ALIGN
config RISCV_TRAP_HANDLER_ALIGNMENT
int "Alignment of RISC-V trap handler in bytes"
default 64 if RISCV_HAS_CLIC
default 4
help
This value configures the alignment of RISC-V trap handling

@@ -6,6 +6,7 @@ zephyr_library_sources(
cpu_idle.c
fatal.c
irq_manage.c
isr.S
prep_c.c
reboot.c
reset.S
@@ -20,10 +21,9 @@ endif ()
zephyr_library_sources_ifdef(CONFIG_FPU_SHARING fpu.c fpu.S)
zephyr_library_sources_ifdef(CONFIG_DEBUG_COREDUMP coredump.c)
zephyr_library_sources_ifdef(CONFIG_IRQ_OFFLOAD irq_offload.c)
zephyr_library_sources_ifdef(CONFIG_GEN_SW_ISR_TABLE isr.S)
zephyr_library_sources_ifdef(CONFIG_RISCV_PMP pmp.c pmp.S)
zephyr_library_sources_ifdef(CONFIG_THREAD_LOCAL_STORAGE tls.c)
zephyr_library_sources_ifdef(CONFIG_USERSPACE userspace.S)
zephyr_library_sources_ifdef(CONFIG_SEMIHOST semihost.c)
zephyr_library_sources_ifdef(CONFIG_EXCEPTION_STACK_TRACE stacktrace.c)
zephyr_library_sources_ifdef(CONFIG_RISCV_EXCEPTION_STACK_TRACE stacktrace.c)
zephyr_linker_sources(ROM_START SORT_KEY 0x0vectors vector_table.ld)

@@ -67,7 +67,7 @@ struct riscv_arch_block {
*/
static struct riscv_arch_block arch_blk;
void arch_coredump_info_dump(const struct arch_esf *esf)
void arch_coredump_info_dump(const z_arch_esf_t *esf)
{
struct coredump_arch_hdr_t hdr = {
.id = COREDUMP_ARCH_HDR_ID,

@@ -4,6 +4,7 @@
* SPDX-License-Identifier: Apache-2.0
*/
#include <zephyr/debug/symtab.h>
#include <zephyr/kernel.h>
#include <zephyr/kernel_structs.h>
#include <kernel_internal.h>
@@ -29,15 +30,15 @@ static const struct z_exc_handle exceptions[] = {
#endif
/* Stack trace function */
void z_riscv_unwind_stack(const struct arch_esf *esf, const _callee_saved_t *csf);
void z_riscv_unwind_stack(const z_arch_esf_t *esf);
uintptr_t z_riscv_get_sp_before_exc(const struct arch_esf *esf)
uintptr_t z_riscv_get_sp_before_exc(const z_arch_esf_t *esf)
{
/*
* Kernel stack pointer prior this exception i.e. before
* storing the exception stack frame.
*/
uintptr_t sp = (uintptr_t)esf + sizeof(struct arch_esf);
uintptr_t sp = (uintptr_t)esf + sizeof(z_arch_esf_t);
#ifdef CONFIG_USERSPACE
if ((esf->mstatus & MSTATUS_MPP) == PRV_U) {
@@ -53,12 +54,12 @@ uintptr_t z_riscv_get_sp_before_exc(const struct arch_esf *esf)
}
FUNC_NORETURN void z_riscv_fatal_error(unsigned int reason,
const struct arch_esf *esf)
const z_arch_esf_t *esf)
{
z_riscv_fatal_error_csf(reason, esf, NULL);
}
FUNC_NORETURN void z_riscv_fatal_error_csf(unsigned int reason, const struct arch_esf *esf,
FUNC_NORETURN void z_riscv_fatal_error_csf(unsigned int reason, const z_arch_esf_t *esf,
const _callee_saved_t *csf)
{
#ifdef CONFIG_EXCEPTION_DEBUG
@@ -79,7 +80,14 @@ FUNC_NORETURN void z_riscv_fatal_error_csf(unsigned int reason, const struct arc
#endif /* CONFIG_RISCV_ISA_RV32E */
LOG_ERR(" sp: " PR_REG, z_riscv_get_sp_before_exc(esf));
LOG_ERR(" ra: " PR_REG, esf->ra);
#ifndef CONFIG_SYMTAB
LOG_ERR(" mepc: " PR_REG, esf->mepc);
#else
uint32_t offset = 0;
const char *name = symtab_find_symbol_name(esf->mepc, &offset);
LOG_ERR(" mepc: " PR_REG " [%s+0x%x]", esf->mepc, name, offset);
#endif
LOG_ERR("mstatus: " PR_REG, esf->mstatus);
LOG_ERR("");
}
@@ -99,8 +107,8 @@ FUNC_NORETURN void z_riscv_fatal_error_csf(unsigned int reason, const struct arc
LOG_ERR("");
}
if (IS_ENABLED(CONFIG_EXCEPTION_STACK_TRACE)) {
z_riscv_unwind_stack(esf, csf);
if (IS_ENABLED(CONFIG_RISCV_EXCEPTION_STACK_TRACE) && (esf != NULL)) {
z_riscv_unwind_stack(esf);
}
#endif /* CONFIG_EXCEPTION_DEBUG */
@@ -144,14 +152,14 @@ static char *cause_str(unsigned long cause)
}
}
static bool bad_stack_pointer(struct arch_esf *esf)
static bool bad_stack_pointer(z_arch_esf_t *esf)
{
#ifdef CONFIG_PMP_STACK_GUARD
/*
* Check if the kernel stack pointer prior this exception (before
* storing the exception stack frame) was in the stack guard area.
*/
uintptr_t sp = (uintptr_t)esf + sizeof(struct arch_esf);
uintptr_t sp = (uintptr_t)esf + sizeof(z_arch_esf_t);
#ifdef CONFIG_USERSPACE
if (_current->arch.priv_stack_start != 0 &&
@@ -189,7 +197,7 @@ static bool bad_stack_pointer(struct arch_esf *esf)
return false;
}
void _Fault(struct arch_esf *esf)
void _Fault(z_arch_esf_t *esf)
{
#ifdef CONFIG_USERSPACE
/*
@@ -241,7 +249,7 @@ FUNC_NORETURN void arch_syscall_oops(void *ssf_ptr)
void z_impl_user_fault(unsigned int reason)
{
struct arch_esf *oops_esf = _current->syscall_frame;
z_arch_esf_t *oops_esf = _current->syscall_frame;
if (((_current->base.user_options & K_USER) != 0) &&
reason != K_ERR_STACK_CHK_FAIL) {

@@ -204,7 +204,7 @@ void z_riscv_fpu_enter_exc(void)
* Note that the exception depth count was not incremented before this call
* as no further exceptions are expected before returning to normal mode.
*/
void z_riscv_fpu_trap(struct arch_esf *esf)
void z_riscv_fpu_trap(z_arch_esf_t *esf)
{
__ASSERT((esf->mstatus & MSTATUS_FS) == 0 &&
(csr_read(mstatus) & MSTATUS_FS) == 0,
@@ -293,7 +293,7 @@ static bool fpu_access_allowed(unsigned int exc_update_level)
* This is called on every exception exit except for z_riscv_fpu_trap().
* In that case the exception level of interest is 1 (soon to be 0).
*/
void z_riscv_fpu_exit_exc(struct arch_esf *esf)
void z_riscv_fpu_exit_exc(z_arch_esf_t *esf)
{
if (fpu_access_allowed(1)) {
esf->mstatus &= ~MSTATUS_FS;

@@ -24,22 +24,22 @@
/* Convenience macro for loading/storing register states. */
#define DO_CALLER_SAVED(op) \
RV_E( op t0, __struct_arch_esf_t0_OFFSET(sp) );\
RV_E( op t1, __struct_arch_esf_t1_OFFSET(sp) );\
RV_E( op t2, __struct_arch_esf_t2_OFFSET(sp) );\
RV_I( op t3, __struct_arch_esf_t3_OFFSET(sp) );\
RV_I( op t4, __struct_arch_esf_t4_OFFSET(sp) );\
RV_I( op t5, __struct_arch_esf_t5_OFFSET(sp) );\
RV_I( op t6, __struct_arch_esf_t6_OFFSET(sp) );\
RV_E( op a0, __struct_arch_esf_a0_OFFSET(sp) );\
RV_E( op a1, __struct_arch_esf_a1_OFFSET(sp) );\
RV_E( op a2, __struct_arch_esf_a2_OFFSET(sp) );\
RV_E( op a3, __struct_arch_esf_a3_OFFSET(sp) );\
RV_E( op a4, __struct_arch_esf_a4_OFFSET(sp) );\
RV_E( op a5, __struct_arch_esf_a5_OFFSET(sp) );\
RV_I( op a6, __struct_arch_esf_a6_OFFSET(sp) );\
RV_I( op a7, __struct_arch_esf_a7_OFFSET(sp) );\
RV_E( op ra, __struct_arch_esf_ra_OFFSET(sp) )
RV_E( op t0, __z_arch_esf_t_t0_OFFSET(sp) );\
RV_E( op t1, __z_arch_esf_t_t1_OFFSET(sp) );\
RV_E( op t2, __z_arch_esf_t_t2_OFFSET(sp) );\
RV_I( op t3, __z_arch_esf_t_t3_OFFSET(sp) );\
RV_I( op t4, __z_arch_esf_t_t4_OFFSET(sp) );\
RV_I( op t5, __z_arch_esf_t_t5_OFFSET(sp) );\
RV_I( op t6, __z_arch_esf_t_t6_OFFSET(sp) );\
RV_E( op a0, __z_arch_esf_t_a0_OFFSET(sp) );\
RV_E( op a1, __z_arch_esf_t_a1_OFFSET(sp) );\
RV_E( op a2, __z_arch_esf_t_a2_OFFSET(sp) );\
RV_E( op a3, __z_arch_esf_t_a3_OFFSET(sp) );\
RV_E( op a4, __z_arch_esf_t_a4_OFFSET(sp) );\
RV_E( op a5, __z_arch_esf_t_a5_OFFSET(sp) );\
RV_I( op a6, __z_arch_esf_t_a6_OFFSET(sp) );\
RV_I( op a7, __z_arch_esf_t_a7_OFFSET(sp) );\
RV_E( op ra, __z_arch_esf_t_ra_OFFSET(sp) )
#ifdef CONFIG_EXCEPTION_DEBUG
/* Convenience macro for storing callee saved register [s0 - s11] states. */
@@ -157,7 +157,7 @@ SECTION_FUNC(exception.entry, _isr_wrapper)
/* Save user stack value. Coming from user space, we know this
* can't overflow the privileged stack. The esf will be allocated
* later but it is safe to store our saved user sp here. */
sr t0, (-__struct_arch_esf_SIZEOF + __struct_arch_esf_sp_OFFSET)(sp)
sr t0, (-__z_arch_esf_t_SIZEOF + __z_arch_esf_t_sp_OFFSET)(sp)
/* Make sure tls pointer is sane */
lr t0, ___cpu_t_current_OFFSET(s0)
@@ -180,21 +180,21 @@ SECTION_FUNC(exception.entry, _isr_wrapper)
SOC_ISR_SW_STACKING
#else
/* Save caller-saved registers on current thread stack. */
addi sp, sp, -__struct_arch_esf_SIZEOF
addi sp, sp, -__z_arch_esf_t_SIZEOF
DO_CALLER_SAVED(sr) ;
#endif /* CONFIG_RISCV_SOC_HAS_ISR_STACKING */
/* Save s0 in the esf and load it with &_current_cpu. */
sr s0, __struct_arch_esf_s0_OFFSET(sp)
sr s0, __z_arch_esf_t_s0_OFFSET(sp)
get_current_cpu s0
/* Save MEPC register */
csrr t0, mepc
sr t0, __struct_arch_esf_mepc_OFFSET(sp)
sr t0, __z_arch_esf_t_mepc_OFFSET(sp)
/* Save MSTATUS register */
csrr t2, mstatus
sr t2, __struct_arch_esf_mstatus_OFFSET(sp)
sr t2, __z_arch_esf_t_mstatus_OFFSET(sp)
#if defined(CONFIG_FPU_SHARING)
/* determine if FPU access was disabled */
@@ -301,7 +301,7 @@ no_fp: /* increment _current->arch.exception_depth */
#ifdef CONFIG_RISCV_SOC_CONTEXT_SAVE
/* Handle context saving at SOC level. */
addi a0, sp, __struct_arch_esf_soc_context_OFFSET
addi a0, sp, __z_arch_esf_t_soc_context_OFFSET
jal ra, __soc_save_context
#endif /* CONFIG_RISCV_SOC_CONTEXT_SAVE */
@@ -351,7 +351,7 @@ no_fp: /* increment _current->arch.exception_depth */
/*
* Call _Fault to handle exception.
* Stack pointer is pointing to a struct_arch_esf structure, pass it
* Stack pointer is pointing to a z_arch_esf_t structure, pass it
* to _Fault (via register a0).
* If _Fault shall return, set return address to
* no_reschedule to restore stack.
@@ -370,9 +370,9 @@ is_kernel_syscall:
* It's safe to always increment by 4, even with compressed
* instructions, because the ecall instruction is always 4 bytes.
*/
lr t0, __struct_arch_esf_mepc_OFFSET(sp)
lr t0, __z_arch_esf_t_mepc_OFFSET(sp)
addi t0, t0, 4
sr t0, __struct_arch_esf_mepc_OFFSET(sp)
sr t0, __z_arch_esf_t_mepc_OFFSET(sp)
#ifdef CONFIG_PMP_STACK_GUARD
/* Re-activate PMP for m-mode */
@@ -383,7 +383,7 @@ is_kernel_syscall:
#endif
/* Determine what to do. Operation code is in t0. */
lr t0, __struct_arch_esf_t0_OFFSET(sp)
lr t0, __z_arch_esf_t_t0_OFFSET(sp)
.if RV_ECALL_RUNTIME_EXCEPT != 0; .err; .endif
beqz t0, do_fault
@@ -396,24 +396,8 @@ is_kernel_syscall:
#ifdef CONFIG_RISCV_ALWAYS_SWITCH_THROUGH_ECALL
li t1, RV_ECALL_SCHEDULE
bne t0, t1, skip_schedule
lr a0, __struct_arch_esf_a0_OFFSET(sp)
lr a1, __struct_arch_esf_a1_OFFSET(sp)
#ifdef CONFIG_FPU_SHARING
/*
* When an ECALL is used for a context-switch, the current thread has
* been updated to the next thread.
* Add the exception_depth back to the previous thread.
*/
lb t1, _thread_offset_to_exception_depth(a0)
add t1, t1, -1
sb t1, _thread_offset_to_exception_depth(a0)
lb t1, _thread_offset_to_exception_depth(a1)
add t1, t1, 1
sb t1, _thread_offset_to_exception_depth(a1)
#endif
lr a0, __z_arch_esf_t_a0_OFFSET(sp)
lr a1, __z_arch_esf_t_a1_OFFSET(sp)
j reschedule
skip_schedule:
#endif
@@ -424,7 +408,7 @@ skip_schedule:
do_fault:
/* Handle RV_ECALL_RUNTIME_EXCEPT. Retrieve reason in a0, esf in A1. */
lr a0, __struct_arch_esf_a0_OFFSET(sp)
lr a0, __z_arch_esf_t_a0_OFFSET(sp)
1: mv a1, sp
#ifdef CONFIG_EXCEPTION_DEBUG
@@ -447,8 +431,8 @@ do_irq_offload:
* Routine pointer is in saved a0, argument in saved a1
* so we load them with a1/a0 (reversed).
*/
lr a1, __struct_arch_esf_a0_OFFSET(sp)
lr a0, __struct_arch_esf_a1_OFFSET(sp)
lr a1, __z_arch_esf_t_a0_OFFSET(sp)
lr a0, __z_arch_esf_t_a1_OFFSET(sp)
/* Increment _current_cpu->nested */
lw t1, ___cpu_t_nested_OFFSET(s0)
@@ -490,18 +474,18 @@ is_user_syscall:
* Same as for is_kernel_syscall: increment saved MEPC by 4 to
* prevent triggering the same ecall again upon exiting the ISR.
*/
lr t1, __struct_arch_esf_mepc_OFFSET(sp)
lr t1, __z_arch_esf_t_mepc_OFFSET(sp)
addi t1, t1, 4
sr t1, __struct_arch_esf_mepc_OFFSET(sp)
sr t1, __z_arch_esf_t_mepc_OFFSET(sp)
/* Restore argument registers from user stack */
lr a0, __struct_arch_esf_a0_OFFSET(sp)
lr a1, __struct_arch_esf_a1_OFFSET(sp)
lr a2, __struct_arch_esf_a2_OFFSET(sp)
lr a3, __struct_arch_esf_a3_OFFSET(sp)
lr a4, __struct_arch_esf_a4_OFFSET(sp)
lr a5, __struct_arch_esf_a5_OFFSET(sp)
lr t0, __struct_arch_esf_t0_OFFSET(sp)
lr a0, __z_arch_esf_t_a0_OFFSET(sp)
lr a1, __z_arch_esf_t_a1_OFFSET(sp)
lr a2, __z_arch_esf_t_a2_OFFSET(sp)
lr a3, __z_arch_esf_t_a3_OFFSET(sp)
lr a4, __z_arch_esf_t_a4_OFFSET(sp)
lr a5, __z_arch_esf_t_a5_OFFSET(sp)
lr t0, __z_arch_esf_t_t0_OFFSET(sp)
#if defined(CONFIG_RISCV_ISA_RV32E)
/* Stack alignment for RV32E is 4 bytes */
addi sp, sp, -4
@@ -535,7 +519,7 @@ valid_syscall_id:
#endif /* CONFIG_RISCV_ISA_RV32E */
/* Update a0 (return value) on the stack */
sr a0, __struct_arch_esf_a0_OFFSET(sp)
sr a0, __z_arch_esf_t_a0_OFFSET(sp)
/* Disable IRQs again before leaving */
csrc mstatus, MSTATUS_IEN
@@ -550,7 +534,7 @@ is_interrupt:
* If we came from userspace then we need to reconfigure the
* PMP for kernel mode stack guard.
*/
lr t0, __struct_arch_esf_mstatus_OFFSET(sp)
lr t0, __z_arch_esf_t_mstatus_OFFSET(sp)
li t1, MSTATUS_MPP
and t0, t0, t1
bnez t0, 1f
@@ -681,7 +665,7 @@ no_reschedule:
#ifdef CONFIG_RISCV_SOC_CONTEXT_SAVE
/* Restore context at SOC level */
addi a0, sp, __struct_arch_esf_soc_context_OFFSET
addi a0, sp, __z_arch_esf_t_soc_context_OFFSET
jal ra, __soc_restore_context
#endif /* CONFIG_RISCV_SOC_CONTEXT_SAVE */
@@ -699,8 +683,8 @@ fp_trap_exit:
#endif
/* Restore MEPC and MSTATUS registers */
lr t0, __struct_arch_esf_mepc_OFFSET(sp)
lr t2, __struct_arch_esf_mstatus_OFFSET(sp)
lr t0, __z_arch_esf_t_mepc_OFFSET(sp)
lr t2, __z_arch_esf_t_mstatus_OFFSET(sp)
csrw mepc, t0
csrw mstatus, t2
@@ -727,7 +711,7 @@ fp_trap_exit:
sb t1, %tprel_lo(is_user_mode)(t0)
/* preserve stack pointer for next exception entry */
add t0, sp, __struct_arch_esf_SIZEOF
add t0, sp, __z_arch_esf_t_SIZEOF
sr t0, _curr_cpu_arch_user_exc_sp(s0)
j 2f
@@ -736,13 +720,13 @@ fp_trap_exit:
* We are returning to kernel mode. Store the stack pointer to
* be re-loaded further down.
*/
addi t0, sp, __struct_arch_esf_SIZEOF
sr t0, __struct_arch_esf_sp_OFFSET(sp)
addi t0, sp, __z_arch_esf_t_SIZEOF
sr t0, __z_arch_esf_t_sp_OFFSET(sp)
2:
#endif
/* Restore s0 (it is no longer ours) */
lr s0, __struct_arch_esf_s0_OFFSET(sp)
lr s0, __z_arch_esf_t_s0_OFFSET(sp)
#ifdef CONFIG_RISCV_SOC_HAS_ISR_STACKING
SOC_ISR_SW_UNSTACKING
@@ -752,10 +736,10 @@ fp_trap_exit:
#ifdef CONFIG_USERSPACE
/* retrieve saved stack pointer */
lr sp, __struct_arch_esf_sp_OFFSET(sp)
lr sp, __z_arch_esf_t_sp_OFFSET(sp)
#else
/* remove esf from the stack */
addi sp, sp, __struct_arch_esf_SIZEOF
addi sp, sp, __z_arch_esf_t_SIZEOF
#endif
#endif /* CONFIG_RISCV_SOC_HAS_ISR_STACKING */

@@ -13,7 +13,6 @@
* structures.
*/
#include <zephyr/arch/exception.h>
#include <zephyr/kernel.h>
#include <kernel_arch_data.h>
#include <gen_offset.h>
@@ -89,43 +88,43 @@ GEN_OFFSET_SYM(_thread_arch_t, exception_depth);
#endif /* CONFIG_FPU_SHARING */
/* esf member offsets */
GEN_OFFSET_STRUCT(arch_esf, ra);
GEN_OFFSET_STRUCT(arch_esf, t0);
GEN_OFFSET_STRUCT(arch_esf, t1);
GEN_OFFSET_STRUCT(arch_esf, t2);
GEN_OFFSET_STRUCT(arch_esf, a0);
GEN_OFFSET_STRUCT(arch_esf, a1);
GEN_OFFSET_STRUCT(arch_esf, a2);
GEN_OFFSET_STRUCT(arch_esf, a3);
GEN_OFFSET_STRUCT(arch_esf, a4);
GEN_OFFSET_STRUCT(arch_esf, a5);
GEN_OFFSET_SYM(z_arch_esf_t, ra);
GEN_OFFSET_SYM(z_arch_esf_t, t0);
GEN_OFFSET_SYM(z_arch_esf_t, t1);
GEN_OFFSET_SYM(z_arch_esf_t, t2);
GEN_OFFSET_SYM(z_arch_esf_t, a0);
GEN_OFFSET_SYM(z_arch_esf_t, a1);
GEN_OFFSET_SYM(z_arch_esf_t, a2);
GEN_OFFSET_SYM(z_arch_esf_t, a3);
GEN_OFFSET_SYM(z_arch_esf_t, a4);
GEN_OFFSET_SYM(z_arch_esf_t, a5);
#if !defined(CONFIG_RISCV_ISA_RV32E)
GEN_OFFSET_STRUCT(arch_esf, t3);
GEN_OFFSET_STRUCT(arch_esf, t4);
GEN_OFFSET_STRUCT(arch_esf, t5);
GEN_OFFSET_STRUCT(arch_esf, t6);
GEN_OFFSET_STRUCT(arch_esf, a6);
GEN_OFFSET_STRUCT(arch_esf, a7);
GEN_OFFSET_SYM(z_arch_esf_t, t3);
GEN_OFFSET_SYM(z_arch_esf_t, t4);
GEN_OFFSET_SYM(z_arch_esf_t, t5);
GEN_OFFSET_SYM(z_arch_esf_t, t6);
GEN_OFFSET_SYM(z_arch_esf_t, a6);
GEN_OFFSET_SYM(z_arch_esf_t, a7);
#endif /* !CONFIG_RISCV_ISA_RV32E */
GEN_OFFSET_STRUCT(arch_esf, mepc);
GEN_OFFSET_STRUCT(arch_esf, mstatus);
GEN_OFFSET_SYM(z_arch_esf_t, mepc);
GEN_OFFSET_SYM(z_arch_esf_t, mstatus);
GEN_OFFSET_STRUCT(arch_esf, s0);
GEN_OFFSET_SYM(z_arch_esf_t, s0);
#ifdef CONFIG_USERSPACE
GEN_OFFSET_STRUCT(arch_esf, sp);
GEN_OFFSET_SYM(z_arch_esf_t, sp);
#endif
#if defined(CONFIG_RISCV_SOC_CONTEXT_SAVE)
GEN_OFFSET_STRUCT(arch_esf, soc_context);
GEN_OFFSET_SYM(z_arch_esf_t, soc_context);
#endif
#if defined(CONFIG_RISCV_SOC_OFFSETS)
GEN_SOC_OFFSET_SYMS();
#endif
GEN_ABSOLUTE_SYM(__struct_arch_esf_SIZEOF, sizeof(struct arch_esf));
GEN_ABSOLUTE_SYM(__z_arch_esf_t_SIZEOF, sizeof(z_arch_esf_t));
#ifdef CONFIG_EXCEPTION_DEBUG
GEN_ABSOLUTE_SYM(__callee_saved_t_SIZEOF, ROUND_UP(sizeof(_callee_saved_t), ARCH_STACK_PTR_ALIGN));


@@ -204,34 +204,6 @@ static bool set_pmp_entry(unsigned int *index_p, uint8_t perm,
return ok;
}
static inline bool set_pmp_mprv_catchall(unsigned int *index_p,
unsigned long *pmp_addr, unsigned long *pmp_cfg,
unsigned int index_limit)
{
/*
* We'll be using MPRV. Make a fallback entry with everything
* accessible as if no PMP entries were matched which is otherwise
* the default behavior for m-mode without MPRV.
*/
bool ok = set_pmp_entry(index_p, PMP_R | PMP_W | PMP_X,
0, 0, pmp_addr, pmp_cfg, index_limit);
#ifdef CONFIG_QEMU_TARGET
if (ok) {
/*
* Workaround: The above produced 0x1fffffff which is correct.
* But there is a QEMU bug that prevents it from interpreting
* this value correctly. Hardcode the special case used by
* QEMU to bypass this bug for now. The QEMU fix is here:
* https://lists.gnu.org/archive/html/qemu-devel/2022-04/msg00961.html
*/
pmp_addr[*index_p - 1] = -1L;
}
#endif
return ok;
}
/**
* @brief Write a range of PMP entries to corresponding PMP registers
*
@@ -348,8 +320,8 @@ static unsigned int global_pmp_end_index;
*/
void z_riscv_pmp_init(void)
{
unsigned long pmp_addr[5];
unsigned long pmp_cfg[2];
unsigned long pmp_addr[4];
unsigned long pmp_cfg[1];
unsigned int index = 0;
/* The read-only area is always there for every mode */
@@ -379,28 +351,10 @@ void z_riscv_pmp_init(void)
(uintptr_t)z_interrupt_stacks[_current_cpu->id],
Z_RISCV_STACK_GUARD_SIZE,
pmp_addr, pmp_cfg, ARRAY_SIZE(pmp_addr));
/*
* This early, the kernel init code uses the IRQ stack and we want to
* safeguard it as soon as possible. But we need a temporary default
* "catch all" PMP entry for MPRV to work. Later on, this entry will
* be set for each thread by z_riscv_pmp_stackguard_prepare().
*/
set_pmp_mprv_catchall(&index, pmp_addr, pmp_cfg, ARRAY_SIZE(pmp_addr));
/* Write those entries to PMP regs. */
write_pmp_entries(0, index, true, pmp_addr, pmp_cfg, ARRAY_SIZE(pmp_addr));
/* Activate our non-locked PMP entries for m-mode */
csr_set(mstatus, MSTATUS_MPRV);
/* And forget about that last entry as we won't need it later */
index--;
#else
/* Write those entries to PMP regs. */
write_pmp_entries(0, index, true, pmp_addr, pmp_cfg, ARRAY_SIZE(pmp_addr));
#endif
write_pmp_entries(0, index, true, pmp_addr, pmp_cfg, ARRAY_SIZE(pmp_addr));
#ifdef CONFIG_SMP
#ifdef CONFIG_PMP_STACK_GUARD
/*
@@ -419,7 +373,6 @@ void z_riscv_pmp_init(void)
}
#endif
__ASSERT(index <= PMPCFG_STRIDE, "provision for one global word only");
global_pmp_cfg[0] = pmp_cfg[0];
global_pmp_last_addr = pmp_addr[index - 1];
global_pmp_end_index = index;
@@ -476,7 +429,24 @@ void z_riscv_pmp_stackguard_prepare(struct k_thread *thread)
set_pmp_entry(&index, PMP_NONE,
stack_bottom, Z_RISCV_STACK_GUARD_SIZE,
PMP_M_MODE(thread));
set_pmp_mprv_catchall(&index, PMP_M_MODE(thread));
/*
* We'll be using MPRV. Make a fallback entry with everything
* accessible as if no PMP entries were matched which is otherwise
* the default behavior for m-mode without MPRV.
*/
set_pmp_entry(&index, PMP_R | PMP_W | PMP_X,
0, 0, PMP_M_MODE(thread));
#ifdef CONFIG_QEMU_TARGET
/*
* Workaround: The above produced 0x1fffffff which is correct.
* But there is a QEMU bug that prevents it from interpreting this
* value correctly. Hardcode the special case used by QEMU to
* bypass this bug for now. The QEMU fix is here:
* https://lists.gnu.org/archive/html/qemu-devel/2022-04/msg00961.html
*/
thread->arch.m_mode_pmpaddr_regs[index-1] = -1L;
#endif
/* remember how many entries we use */
thread->arch.m_mode_pmp_end_index = index;


@@ -7,7 +7,6 @@
#include <zephyr/init.h>
#include <zephyr/kernel.h>
#include <ksched.h>
#include <ipi.h>
#include <zephyr/irq.h>
#include <zephyr/sys/atomic.h>
#include <zephyr/arch/riscv/irq.h>
@@ -87,15 +86,14 @@ static atomic_val_t cpu_pending_ipi[CONFIG_MP_MAX_NUM_CPUS];
#define IPI_SCHED 0
#define IPI_FPU_FLUSH 1
void arch_sched_directed_ipi(uint32_t cpu_bitmap)
void arch_sched_ipi(void)
{
unsigned int key = arch_irq_lock();
unsigned int id = _current_cpu->id;
unsigned int num_cpus = arch_num_cpus();
for (unsigned int i = 0; i < num_cpus; i++) {
if ((i != id) && _kernel.cpus[i].arch.online &&
((cpu_bitmap & BIT(i)) != 0)) {
if (i != id && _kernel.cpus[i].arch.online) {
atomic_set_bit(&cpu_pending_ipi[i], IPI_SCHED);
MSIP(_kernel.cpus[i].arch.hartid) = 1;
}
@@ -104,11 +102,6 @@ void arch_sched_directed_ipi(uint32_t cpu_bitmap)
arch_irq_unlock(key);
}
void arch_sched_broadcast_ipi(void)
{
arch_sched_directed_ipi(IPI_ALL_CPUS_MASK);
}
#ifdef CONFIG_FPU_SHARING
void arch_flush_fpu_ipi(unsigned int cpu)
{
@@ -172,4 +165,5 @@ int arch_smp_init(void)
return 0;
}
SYS_INIT(arch_smp_init, PRE_KERNEL_2, CONFIG_KERNEL_INIT_PRIORITY_DEFAULT);
#endif /* CONFIG_SMP */


@@ -12,96 +12,70 @@
LOG_MODULE_DECLARE(os, CONFIG_KERNEL_LOG_LEVEL);
uintptr_t z_riscv_get_sp_before_exc(const struct arch_esf *esf);
uintptr_t z_riscv_get_sp_before_exc(const z_arch_esf_t *esf);
#define MAX_STACK_FRAMES \
MAX(CONFIG_EXCEPTION_STACK_TRACE_MAX_FRAMES, CONFIG_ARCH_STACKWALK_MAX_FRAMES)
#if __riscv_xlen == 32
#define PR_REG "%08" PRIxPTR
#elif __riscv_xlen == 64
#define PR_REG "%016" PRIxPTR
#endif
#define MAX_STACK_FRAMES CONFIG_EXCEPTION_STACK_TRACE_MAX_FRAMES
struct stackframe {
uintptr_t fp;
uintptr_t ra;
};
typedef bool (*stack_verify_fn)(uintptr_t, const struct k_thread *const, const struct arch_esf *);
#ifdef CONFIG_RISCV_ENABLE_FRAME_POINTER
#define SFP_FMT "fp: "
#else
#define SFP_FMT "sp: "
#endif
static inline bool in_irq_stack_bound(uintptr_t addr, uint8_t cpu_id)
{
uintptr_t start, end;
#ifdef CONFIG_EXCEPTION_STACK_TRACE_SYMTAB
#define LOG_STACK_TRACE(idx, sfp, ra, name, offset) \
LOG_ERR(" %2d: " SFP_FMT PR_REG " ra: " PR_REG " [%s+0x%x]", idx, sfp, ra, name, \
offset)
#else
#define LOG_STACK_TRACE(idx, sfp, ra, name, offset) \
LOG_ERR(" %2d: " SFP_FMT PR_REG " ra: " PR_REG, idx, sfp, ra)
#endif
start = (uintptr_t)K_KERNEL_STACK_BUFFER(z_interrupt_stacks[cpu_id]);
end = start + CONFIG_ISR_STACK_SIZE;
return (addr >= start) && (addr < end);
}
static inline bool in_kernel_thread_stack_bound(uintptr_t addr, const struct k_thread *const thread)
static bool in_stack_bound(uintptr_t addr, const z_arch_esf_t *esf)
{
#ifdef CONFIG_THREAD_STACK_INFO
uintptr_t start, end;
start = thread->stack_info.start;
end = Z_STACK_PTR_ALIGN(thread->stack_info.start + thread->stack_info.size);
if (_current == NULL || arch_is_in_isr()) {
/* We were servicing an interrupt */
uint8_t cpu_id = IS_ENABLED(CONFIG_SMP) ? arch_curr_cpu()->id : 0U;
start = (uintptr_t)K_KERNEL_STACK_BUFFER(z_interrupt_stacks[cpu_id]);
end = start + CONFIG_ISR_STACK_SIZE;
#ifdef CONFIG_USERSPACE
} else if (((esf->mstatus & MSTATUS_MPP) == PRV_U) &&
((_current->base.user_options & K_USER) != 0)) {
/* See: zephyr/include/zephyr/arch/riscv/arch.h */
if (IS_ENABLED(CONFIG_PMP_POWER_OF_TWO_ALIGNMENT)) {
start = _current->arch.priv_stack_start - CONFIG_PRIVILEGED_STACK_SIZE;
end = _current->arch.priv_stack_start;
} else {
start = _current->stack_info.start - CONFIG_PRIVILEGED_STACK_SIZE;
end = _current->stack_info.start;
}
#endif /* CONFIG_USERSPACE */
} else {
start = _current->stack_info.start;
end = Z_STACK_PTR_ALIGN(_current->stack_info.start + _current->stack_info.size);
}
return (addr >= start) && (addr < end);
#else
ARG_UNUSED(addr);
ARG_UNUSED(thread);
/* Return false as we can't check if the addr is in the thread stack without stack info */
return false;
#endif
}
#ifdef CONFIG_USERSPACE
static inline bool in_user_thread_stack_bound(uintptr_t addr, const struct k_thread *const thread)
{
uintptr_t start, end;
/* See: zephyr/include/zephyr/arch/riscv/arch.h */
if (IS_ENABLED(CONFIG_PMP_POWER_OF_TWO_ALIGNMENT)) {
start = thread->arch.priv_stack_start - CONFIG_PRIVILEGED_STACK_SIZE;
end = thread->arch.priv_stack_start;
} else {
start = thread->stack_info.start - CONFIG_PRIVILEGED_STACK_SIZE;
end = thread->stack_info.start;
}
return (addr >= start) && (addr < end);
}
#endif /* CONFIG_USERSPACE */
static bool in_stack_bound(uintptr_t addr, const struct k_thread *const thread,
const struct arch_esf *esf)
{
ARG_UNUSED(esf);
if (!IS_ALIGNED(addr, sizeof(uintptr_t))) {
return false;
}
#ifdef CONFIG_USERSPACE
if ((thread->base.user_options & K_USER) != 0) {
return in_user_thread_stack_bound(addr, thread);
}
#endif /* CONFIG_USERSPACE */
return in_kernel_thread_stack_bound(addr, thread);
}
static bool in_fatal_stack_bound(uintptr_t addr, const struct k_thread *const thread,
const struct arch_esf *esf)
{
if (!IS_ALIGNED(addr, sizeof(uintptr_t))) {
return false;
}
if ((thread == NULL) || arch_is_in_isr()) {
/* We were servicing an interrupt */
uint8_t cpu_id = IS_ENABLED(CONFIG_SMP) ? arch_curr_cpu()->id : 0U;
return in_irq_stack_bound(addr, cpu_id);
}
return in_stack_bound(addr, thread, esf);
return true;
#endif /* CONFIG_THREAD_STACK_INFO */
}
static inline bool in_text_region(uintptr_t addr)
@@ -111,134 +85,62 @@ static inline bool in_text_region(uintptr_t addr)
return (addr >= (uintptr_t)&__text_region_start) && (addr < (uintptr_t)&__text_region_end);
}
#ifdef CONFIG_FRAME_POINTER
static void walk_stackframe(stack_trace_callback_fn cb, void *cookie, const struct k_thread *thread,
const struct arch_esf *esf, stack_verify_fn vrfy,
const _callee_saved_t *csf)
#ifdef CONFIG_RISCV_ENABLE_FRAME_POINTER
void z_riscv_unwind_stack(const z_arch_esf_t *esf)
{
uintptr_t fp, last_fp = 0;
uintptr_t fp = esf->s0;
uintptr_t ra;
struct stackframe *frame;
if (esf != NULL) {
/* Unwind the provided exception stack frame */
fp = esf->s0;
ra = esf->mepc;
} else if ((csf == NULL) || (csf == &_current->callee_saved)) {
/* Unwind current thread (default case when nothing is provided ) */
fp = (uintptr_t)__builtin_frame_address(0);
ra = (uintptr_t)walk_stackframe;
} else {
/* Unwind the provided thread */
fp = csf->s0;
ra = csf->ra;
}
LOG_ERR("call trace:");
for (int i = 0; (i < MAX_STACK_FRAMES) && vrfy(fp, thread, esf) && (fp > last_fp);) {
if (in_text_region(ra)) {
if (!cb(cookie, ra)) {
break;
}
/*
* Increment the iterator only if `ra` is within the text region to get the
* most out of it
*/
i++;
}
last_fp = fp;
/* Unwind to the previous frame */
for (int i = 0; (i < MAX_STACK_FRAMES) && (fp != 0U) && in_stack_bound(fp, esf);) {
frame = (struct stackframe *)fp - 1;
ra = frame->ra;
fp = frame->fp;
}
}
#else /* !CONFIG_FRAME_POINTER */
register uintptr_t current_stack_pointer __asm__("sp");
static void walk_stackframe(stack_trace_callback_fn cb, void *cookie, const struct k_thread *thread,
const struct arch_esf *esf, stack_verify_fn vrfy,
const _callee_saved_t *csf)
{
uintptr_t sp;
uintptr_t ra;
uintptr_t *ksp, last_ksp = 0;
if (esf != NULL) {
/* Unwind the provided exception stack frame */
sp = z_riscv_get_sp_before_exc(esf);
ra = esf->mepc;
} else if ((csf == NULL) || (csf == &_current->callee_saved)) {
/* Unwind current thread (default case when nothing is provided ) */
sp = current_stack_pointer;
ra = (uintptr_t)walk_stackframe;
} else {
/* Unwind the provided thread */
sp = csf->sp;
ra = csf->ra;
}
ksp = (uintptr_t *)sp;
for (int i = 0; (i < MAX_STACK_FRAMES) && vrfy((uintptr_t)ksp, thread, esf) &&
((uintptr_t)ksp > last_ksp);) {
if (in_text_region(ra)) {
if (!cb(cookie, ra)) {
break;
}
#ifdef CONFIG_EXCEPTION_STACK_TRACE_SYMTAB
uint32_t offset = 0;
const char *name = symtab_find_symbol_name(ra, &offset);
#endif
LOG_STACK_TRACE(i, fp, ra, name, offset);
/*
* Increment the iterator only if `ra` is within the text region to get the
* most out of it
*/
i++;
}
last_ksp = (uintptr_t)ksp;
/* Unwind to the previous frame */
ra = ((struct arch_esf *)ksp++)->ra;
}
}
#endif /* CONFIG_FRAME_POINTER */
void arch_stack_walk(stack_trace_callback_fn callback_fn, void *cookie,
const struct k_thread *thread, const struct arch_esf *esf)
{
if (thread == NULL) {
/* In case `thread` is NULL, default that to `_current` and try to unwind */
thread = _current;
fp = frame->fp;
}
walk_stackframe(callback_fn, cookie, thread, esf, in_stack_bound, &thread->callee_saved);
}
#if __riscv_xlen == 32
#define PR_REG "%08" PRIxPTR
#elif __riscv_xlen == 64
#define PR_REG "%016" PRIxPTR
#endif
#ifdef CONFIG_EXCEPTION_STACK_TRACE_SYMTAB
#define LOG_STACK_TRACE(idx, ra, name, offset) \
LOG_ERR(" %2d: ra: " PR_REG " [%s+0x%x]", idx, ra, name, offset)
#else
#define LOG_STACK_TRACE(idx, ra, name, offset) LOG_ERR(" %2d: ra: " PR_REG, idx, ra)
#endif /* CONFIG_EXCEPTION_STACK_TRACE_SYMTAB */
static bool print_trace_address(void *arg, unsigned long ra)
{
int *i = arg;
#ifdef CONFIG_EXCEPTION_STACK_TRACE_SYMTAB
uint32_t offset = 0;
const char *name = symtab_find_symbol_name(ra, &offset);
#endif
LOG_STACK_TRACE((*i)++, ra, name, offset);
return true;
}
void z_riscv_unwind_stack(const struct arch_esf *esf, const _callee_saved_t *csf)
{
int i = 0;
LOG_ERR("call trace:");
walk_stackframe(print_trace_address, &i, _current, esf, in_fatal_stack_bound, csf);
LOG_ERR("");
}
#else /* !CONFIG_RISCV_ENABLE_FRAME_POINTER */
void z_riscv_unwind_stack(const z_arch_esf_t *esf)
{
uintptr_t sp = z_riscv_get_sp_before_exc(esf);
uintptr_t ra;
uintptr_t *ksp = (uintptr_t *)sp;
LOG_ERR("call trace:");
for (int i = 0; (i < MAX_STACK_FRAMES) && ((uintptr_t)ksp != 0U) &&
in_stack_bound((uintptr_t)ksp, esf);
ksp++) {
ra = *ksp;
if (in_text_region(ra)) {
#ifdef CONFIG_EXCEPTION_STACK_TRACE_SYMTAB
uint32_t offset = 0;
const char *name = symtab_find_symbol_name(ra, &offset);
#endif
LOG_STACK_TRACE(i, (uintptr_t)ksp, ra, name, offset);
/*
* Increment the iterator only if `ra` is within the text region to get the
* most out of it
*/
i++;
}
}
LOG_ERR("");
}
#endif /* CONFIG_RISCV_ENABLE_FRAME_POINTER */


@@ -23,15 +23,15 @@ void arch_new_thread(struct k_thread *thread, k_thread_stack_t *stack,
void *p1, void *p2, void *p3)
{
extern void z_riscv_thread_start(void);
struct arch_esf *stack_init;
struct __esf *stack_init;
#ifdef CONFIG_RISCV_SOC_CONTEXT_SAVE
const struct soc_esf soc_esf_init = {SOC_ESF_INIT};
#endif
/* Initial stack frame for thread */
stack_init = (struct arch_esf *)Z_STACK_PTR_ALIGN(
Z_STACK_PTR_TO_FRAME(struct arch_esf, stack_ptr)
stack_init = (struct __esf *)Z_STACK_PTR_ALIGN(
Z_STACK_PTR_TO_FRAME(struct __esf, stack_ptr)
);
/* Setup the initial stack frame */
@@ -212,8 +212,6 @@ FUNC_NORETURN void z_riscv_switch_to_main_no_multithreading(k_thread_entry_t mai
main_stack = (K_THREAD_STACK_BUFFER(z_main_stack) +
K_THREAD_STACK_SIZEOF(z_main_stack));
irq_unlock(MSTATUS_IEN);
__asm__ volatile (
"mv sp, %0; jalr ra, %1, 0"
:


@@ -71,9 +71,9 @@ arch_switch(void *switch_to, void **switched_from)
/* Thin wrapper around z_riscv_fatal_error_csf */
FUNC_NORETURN void z_riscv_fatal_error(unsigned int reason,
const struct arch_esf *esf);
const z_arch_esf_t *esf);
FUNC_NORETURN void z_riscv_fatal_error_csf(unsigned int reason, const struct arch_esf *esf,
FUNC_NORETURN void z_riscv_fatal_error_csf(unsigned int reason, const z_arch_esf_t *esf,
const _callee_saved_t *csf);
static inline bool arch_is_in_isr(void)


@@ -122,7 +122,7 @@ static const struct {
{ .tt = 0x0A, .desc = "tag_overflow", },
};
static void print_trap_type(const struct arch_esf *esf)
static void print_trap_type(const z_arch_esf_t *esf)
{
const int tt = (esf->tbr & TBR_TT) >> TBR_TT_BIT;
const char *desc = "unknown";
@@ -142,7 +142,7 @@ static void print_trap_type(const struct arch_esf *esf)
LOG_ERR("tt = 0x%02X, %s", tt, desc);
}
static void print_integer_registers(const struct arch_esf *esf)
static void print_integer_registers(const z_arch_esf_t *esf)
{
const struct savearea *flushed = (struct savearea *) esf->out[6];
@@ -159,7 +159,7 @@ static void print_integer_registers(const struct arch_esf *esf)
}
}
static void print_special_registers(const struct arch_esf *esf)
static void print_special_registers(const z_arch_esf_t *esf)
{
LOG_ERR(
"psr: %08x wim: %08x tbr: %08x y: %08x",
@@ -168,7 +168,7 @@ static void print_special_registers(const struct arch_esf *esf)
LOG_ERR(" pc: %08x npc: %08x", esf->pc, esf->npc);
}
static void print_backtrace(const struct arch_esf *esf)
static void print_backtrace(const z_arch_esf_t *esf)
{
const int MAX_LOGLINES = 40;
const struct savearea *s = (struct savearea *) esf->out[6];
@@ -190,7 +190,7 @@ static void print_backtrace(const struct arch_esf *esf)
}
}
static void print_all(const struct arch_esf *esf)
static void print_all(const z_arch_esf_t *esf)
{
LOG_ERR("");
print_trap_type(esf);
@@ -205,7 +205,7 @@ static void print_all(const struct arch_esf *esf)
#endif /* CONFIG_EXCEPTION_DEBUG */
FUNC_NORETURN void z_sparc_fatal_error(unsigned int reason,
const struct arch_esf *esf)
const z_arch_esf_t *esf)
{
#if CONFIG_EXCEPTION_DEBUG
if (esf != NULL) {


@@ -72,7 +72,7 @@ SECTION_FUNC(TEXT, __sparc_trap_except_reason)
mov %l5, %g3
/* Allocate an ABI stack frame and exception stack frame */
sub %fp, 96 + __struct_arch_esf_SIZEOF, %sp
sub %fp, 96 + __z_arch_esf_t_SIZEOF, %sp
/*
* %fp: %sp of interrupted task
* %sp: %sp of interrupted task - ABI_frame - esf
@@ -81,19 +81,19 @@ SECTION_FUNC(TEXT, __sparc_trap_except_reason)
mov %l7, %o0
/* Fill in the content of the exception stack frame */
#if defined(CONFIG_EXTRA_EXCEPTION_INFO)
std %i0, [%sp + 96 + __struct_arch_esf_out_OFFSET + 0x00]
std %i2, [%sp + 96 + __struct_arch_esf_out_OFFSET + 0x08]
std %i4, [%sp + 96 + __struct_arch_esf_out_OFFSET + 0x10]
std %i6, [%sp + 96 + __struct_arch_esf_out_OFFSET + 0x18]
std %g0, [%sp + 96 + __struct_arch_esf_global_OFFSET + 0x00]
std %g2, [%sp + 96 + __struct_arch_esf_global_OFFSET + 0x08]
std %g4, [%sp + 96 + __struct_arch_esf_global_OFFSET + 0x10]
std %g6, [%sp + 96 + __struct_arch_esf_global_OFFSET + 0x18]
std %i0, [%sp + 96 + __z_arch_esf_t_out_OFFSET + 0x00]
std %i2, [%sp + 96 + __z_arch_esf_t_out_OFFSET + 0x08]
std %i4, [%sp + 96 + __z_arch_esf_t_out_OFFSET + 0x10]
std %i6, [%sp + 96 + __z_arch_esf_t_out_OFFSET + 0x18]
std %g0, [%sp + 96 + __z_arch_esf_t_global_OFFSET + 0x00]
std %g2, [%sp + 96 + __z_arch_esf_t_global_OFFSET + 0x08]
std %g4, [%sp + 96 + __z_arch_esf_t_global_OFFSET + 0x10]
std %g6, [%sp + 96 + __z_arch_esf_t_global_OFFSET + 0x18]
#endif
std %l0, [%sp + 96 + __struct_arch_esf_psr_OFFSET] /* psr pc */
std %l2, [%sp + 96 + __struct_arch_esf_npc_OFFSET] /* npc wim */
std %l0, [%sp + 96 + __z_arch_esf_t_psr_OFFSET] /* psr pc */
std %l2, [%sp + 96 + __z_arch_esf_t_npc_OFFSET] /* npc wim */
rd %y, %l7
std %l6, [%sp + 96 + __struct_arch_esf_tbr_OFFSET] /* tbr y */
std %l6, [%sp + 96 + __z_arch_esf_t_tbr_OFFSET] /* tbr y */
/* Enable traps, raise PIL to mask all maskable interrupts. */
or %l0, PSR_PIL, %o2


@@ -31,11 +31,11 @@ GEN_OFFSET_SYM(_callee_saved_t, i6);
GEN_OFFSET_SYM(_callee_saved_t, o6);
/* esf member offsets */
GEN_OFFSET_STRUCT(arch_esf, out);
GEN_OFFSET_STRUCT(arch_esf, global);
GEN_OFFSET_STRUCT(arch_esf, npc);
GEN_OFFSET_STRUCT(arch_esf, psr);
GEN_OFFSET_STRUCT(arch_esf, tbr);
GEN_ABSOLUTE_SYM(__struct_arch_esf_SIZEOF, sizeof(struct arch_esf));
GEN_OFFSET_SYM(z_arch_esf_t, out);
GEN_OFFSET_SYM(z_arch_esf_t, global);
GEN_OFFSET_SYM(z_arch_esf_t, npc);
GEN_OFFSET_SYM(z_arch_esf_t, psr);
GEN_OFFSET_SYM(z_arch_esf_t, tbr);
GEN_ABSOLUTE_SYM(__z_arch_esf_t_SIZEOF, STACK_ROUND_UP(sizeof(z_arch_esf_t)));
GEN_ABS_SYM_END


@@ -43,7 +43,7 @@ static inline void arch_switch(void *switch_to, void **switched_from)
}
FUNC_NORETURN void z_sparc_fatal_error(unsigned int reason,
const struct arch_esf *esf);
const z_arch_esf_t *esf);
static inline bool arch_is_in_isr(void)
{


@@ -166,9 +166,7 @@ endmenu
config X86_EXCEPTION_STACK_TRACE
bool
default y
select DEBUG_INFO
select THREAD_STACK_INFO
depends on !OMIT_FRAME_POINTER
depends on EXCEPTION_STACK_TRACE
help
Internal config to enable runtime stack traces on fatal exceptions.

Some files were not shown because too many files have changed in this diff.