zephyr/arch/arm64/core/prep_c.c


/*
 * Copyright (c) 2019 Carlo Caione <ccaione@baylibre.com>
 *
 * SPDX-License-Identifier: Apache-2.0
 */
/**
 * @file
 * @brief Full C support initialization
 *
 * Initialization of full C support: zero the .bss and call z_cstart().
 *
 * Stack is available in this module, but not the global data/bss until their
 * initialization is performed.
 */
#include <kernel_internal.h>
aarch64: Fix alignment fault on z_bss_zero()

Using newlibc with AArch64 is causing an alignment fault in z_bss_zero()
when the code is run on real hardware (on QEMU the problem is not
reproducible).

The main cause is that the memset() function exported by newlibc uses
'DC ZVA' to zero out memory. While this is often a nice optimization, it
causes the issue on AArch64 because memset() is being used before the MMU
is enabled, and when the MMU is disabled all data accesses are treated as
Device_nGnRnE. This is a problem because, quoting from the ARM reference
manual: "If the memory region being zeroed is any type of Device memory,
then DC ZVA generates an Alignment fault which is prioritized in the same
way as other alignment faults that are determined by the memory type".

newlibc tries to be smart about this, reading the DCZID_EL0 register
before deciding whether to use 'DC ZVA' or not. While this is a good idea
for code running in EL0, the Zephyr kernel currently runs in EL1. This
means that the value of the DCZID_EL0 register is actually retrieved from
the HCR_EL2.TDZ bit, which is always 0 because EL2 is not currently
supported / enabled. So the 'DC ZVA' instruction is used unconditionally
in the newlibc memset() implementation.

The "standard" solution for this case is usually to use a dedicated
memset routine for two situations: (1) against IO memory, or (2) against
normal memory but with the MMU disabled (which means all memory is
considered device memory for data accesses).

To fix this issue in Zephyr we avoid calling memset() when clearing the
bss, and instead use a simple loop to zero the memory region.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-14 21:28:55 +01:00
#include <linker/linker-defs.h>
extern FUNC_NORETURN void z_cstart(void);
#ifdef CONFIG_ARM_MMU
extern void z_arm64_mmu_init(void);
#else
static inline void z_arm64_mmu_init(void) { }
#endif
static inline void z_arm64_bss_zero(void)
{
	uint64_t *p = (uint64_t *)__bss_start;
	uint64_t *end = (uint64_t *)__bss_end;

	while (p < end) {
		*p++ = 0U;
	}
}
/**
 *
 * @brief Prepare to and run C code
 *
 * This routine prepares for the execution of and runs C code.
 *
 * @return N/A
 */
void z_arm64_prep_c(void)
{
	z_arm64_bss_zero();
#ifdef CONFIG_XIP
	z_data_copy();
#endif
	z_arm64_mmu_init();
	z_arm64_interrupt_init();
	z_cstart();
	CODE_UNREACHABLE;
}
#if CONFIG_MP_NUM_CPUS > 1
extern FUNC_NORETURN void z_arm64_secondary_start(void);
void z_arm64_secondary_prep_c(void)
{
	z_arm64_secondary_start();
	CODE_UNREACHABLE;
}
#endif