mmu: fix ARM64 compilation by removing z_mapped_size usage
The linker script defines `z_mapped_size` as follows:

```
z_mapped_size = z_mapped_end - z_mapped_start;
```

This is done with the belief that precomputed values at link time will make the code smaller and faster.

On AArch64, symbol values are relocated and loaded relative to the PC, as those are normally meant to be memory addresses. Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then `z_mapped_size` might still have a reasonable value, say 0x59334. But, when interpreted as an address, that is very far from the PC, whose value is in the neighborhood of 0x2000000000. That overflows the 4GB relocation range:

```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c): relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```

The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of `z_mapped_end - z_mapped_start` at the source code level. Given this is used within loops that already start with `z_mapped_start` anyway, the compiler is smart enough to combine the two occurrences and dispense with a size counter, making the code effectively slightly better for all while avoiding the AArch64 relocation overflow:

```
   text    data     bss     dec     hex filename
   1216       8  294936  296160   484e0 mmu.c.obj.arm64.before
   1212       8  294936  296156   484dc mmu.c.obj.arm64.after
   1110       8    9244   10362    287a mmu.c.obj.x86-64.before
   1106       8    9244   10358    2876 mmu.c.obj.x86-64.after
```

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
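As a minimal sketch of the approach described above, the size can be derived at the source level from the two boundary symbols, so no standalone size symbol ever needs a PC-relative relocation. Only `Z_KERNEL_VIRT_SIZE` and the two linker symbols appear in the commit message; the helper macros `Z_KERNEL_VIRT_START` and `Z_KERNEL_VIRT_END` and their exact placement in the Zephyr headers are assumptions made for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* Boundary symbols provided by the linker script. */
extern char z_mapped_start[];
extern char z_mapped_end[];

/* Sketch only: these relocations reference symbol addresses, which lie
 * inside the image and are therefore reachable from the PC.
 */
#define Z_KERNEL_VIRT_START ((uint8_t *)&z_mapped_start[0])   /* assumed name */
#define Z_KERNEL_VIRT_END   ((uint8_t *)&z_mapped_end[0])     /* assumed name */

/* Size computed in C instead of being a linker-provided "address". */
#define Z_KERNEL_VIRT_SIZE  ((size_t)(Z_KERNEL_VIRT_END - Z_KERNEL_VIRT_START))
```

Nothing about the value changes compared with the linker-script definition; only the way it reaches the code does, which is what keeps the AArch64 relocations within range.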
parent 46d6a73f91
commit f9461d1ac4

3 changed files with 1 addition and 3 deletions
```diff
@@ -186,7 +186,6 @@ extern char __data_ram_end[];
 /* Virtual addresses of page-aligned kernel image mapped into RAM at boot */
 extern char z_mapped_start[];
 extern char z_mapped_end[];
-extern char z_mapped_size[];
 #endif /* CONFIG_MMU */
 
 /* Includes text and rodata */
```
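The hunk above only covers the removal of the now-unused `z_mapped_size` declaration. To illustrate the commit message's point about loops, here is a hypothetical page walk (not the actual `z_mem_manage_init` code; the function and the 4 KiB page size are assumptions) showing why a size expressed as `z_mapped_end - z_mapped_start` lets the compiler drop the separate size counter:

```c
#include <stddef.h>
#include <stdint.h>

extern char z_mapped_start[];
extern char z_mapped_end[];

/* Same source-level definition as in the sketch further up. */
#define Z_KERNEL_VIRT_SIZE ((size_t)(z_mapped_end - z_mapped_start))

/* Hypothetical walk over the boot-mapped kernel image.  The loop bound
 * z_mapped_start + (z_mapped_end - z_mapped_start) folds to z_mapped_end,
 * so the generated code can compare the running pointer against
 * z_mapped_end directly instead of maintaining a size value.
 */
void walk_mapped_pages(void (*visit)(uint8_t *page))
{
    for (size_t offset = 0; offset < Z_KERNEL_VIRT_SIZE; offset += 4096) {
        visit((uint8_t *)z_mapped_start + offset);  /* 4 KiB pages assumed */
    }
}
```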