xtensa: dc233c: force invalidating TLBs during page table swap
QEMU MMU tracing showed that there might be something wrong with its Xtensa MMU implementation, which results in an access violation when running samples/userspace/hello_world_user. Here is the MMU trace from QEMU for the failed runs:

  get_pte: autorefill(00109020): PTE va = 20000424, pa = 0010c424
  get_physical_addr_mmu: autorefill(00109020): 00109000 -> 00109006
  xtensa_cpu_tlb_fill(00109020, 1, 0) -> 00109020, ret = 0
  xtensa_cpu_tlb_fill(00109028, 1, 0) -> 00109028, ret = 0
  xtensa_cpu_tlb_fill(00109014, 0, 2) -> 00103050, ret = 26

The failure occurs while reading from 0x109014. From the trace above, the auto-refill maps 0x109000 correctly with ring 0 and RW access with WB cache (which should be correct for the first access under kernel mode). The page at 0x109000 is the libc partition, which needs to be accessible from user threads. However, when that page is accessed, the returned physical address is 0x103050 (resulting in a load/store access violation). We always identity map memory pages, so the MMU should never return a different physical address.

After forcing TLB invalidation during page table swaps, the MMU trace is:

  get_pte: autorefill(00109020): PTE va = 20000424, pa = 0010c424
  get_physical_addr_mmu: autorefill(00109020): 00109000 -> 00109006
  xtensa_cpu_tlb_fill(00109020, 1, 0) -> 00109020, ret = 0
  get_pte: autorefill(00109028): PTE va = 21000424, pa = 0010e424
  get_physical_addr_mmu: autorefill(00109028): 00109000 -> 00109022
  xtensa_cpu_tlb_fill(00109028, 1, 0) -> 00109028, ret = 0
  get_pte: autorefill(00109014): PTE va = 21000424, pa = 0010e424
  get_physical_addr_mmu: autorefill(00109014): 00109000 -> 00109022
  xtensa_cpu_tlb_fill(00109014, 0, 2) -> 00109014, ret = 0
  xtensa_cpu_tlb_fill(00109020, 0, 0) -> 00109020, ret = 0

Here, when the same page is accessed, it gets the correct PTE, which is ring 2 with RW access (but no cache). Actually accessing the variable via its virtual address returns the correct physical address, 0x109014. So work around this by forcing TLB invalidation during page table swaps.

Fixes #66029

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
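For readers unfamiliar with the Xtensa MMU, the following is a minimal sketch of what "forcing TLB invalidation" amounts to: walk every auto-refill way and entry of the instruction and data TLBs and drop it with the IITLB/IDTLB instructions, so the next access re-walks the new page table instead of reusing a stale translation. The way/entry counts, PAGE_SHIFT, and the function name are assumptions for illustration only, not the actual Zephyr dc233c code.

#include <stdint.h>

/* Assumed TLB geometry for this sketch: 4 auto-refill ways with 4 entries
 * each and 4 KiB pages.  Real values come from the core's XCHAL_* defines.
 */
#define ARF_WAYS    4
#define ARF_ENTRIES 4
#define PAGE_SHIFT  12

/* Invalidate every auto-refill ITLB/DTLB entry so the next access goes
 * back through the page table walk (auto-refill) instead of reusing a
 * translation left over from the previous page table.
 */
static inline void invalidate_autorefill_tlbs(void)
{
	for (uint32_t way = 0; way < ARF_WAYS; way++) {
		for (uint32_t idx = 0; idx < ARF_ENTRIES; idx++) {
			/* Entry encoding: index bits in the VPN field, way in the low bits. */
			uint32_t entry = way + (idx << PAGE_SHIFT);

			__asm__ volatile ("idtlb %0" : : "r"(entry)); /* data TLB entry */
			__asm__ volatile ("iitlb %0" : : "r"(entry)); /* instruction TLB entry */
		}
	}
	__asm__ volatile ("isync"); /* make the invalidations take effect */
}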
parent fa25c0b0b8
commit debb9f6352

1 changed file with 1 addition and 0 deletions
@@ -12,3 +12,4 @@ config SOC_XTENSA_DC233C
 	select CPU_HAS_MMU
 	select ARCH_HAS_RESERVED_PAGE_FRAMES if XTENSA_MMU
 	select ARCH_HAS_USERSPACE if XTENSA_MMU
+	select XTENSA_INVALIDATE_MEM_DOMAIN_TLB_ON_SWAP if XTENSA_MMU
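As a hedged illustration of how a Kconfig select like this is typically consumed: the build system turns the symbol into a CONFIG_XTENSA_INVALIDATE_MEM_DOMAIN_TLB_ON_SWAP macro, which can gate the extra invalidation on the memory-domain swap path. The function below is hypothetical and reuses the sketch above; only the CONFIG_ symbol matches the diff.

/* Hypothetical consumer-side sketch; not the real Zephyr Xtensa MMU API. */
#include <stdint.h>

void invalidate_autorefill_tlbs(void); /* sketch from the commit message above */

void swap_page_tables(uint32_t *new_l1_page_table)
{
#ifdef CONFIG_XTENSA_INVALIDATE_MEM_DOMAIN_TLB_ON_SWAP
	/* Workaround for the stale auto-refill entries observed under QEMU:
	 * drop all auto-refill TLB entries before switching memory domains.
	 */
	invalidate_autorefill_tlbs();
#endif

	/* Point the MMU at the incoming domain's L1 page table, e.g. by
	 * updating PTEVADDR and the ring ASIDs (details omitted here).
	 */
	(void)new_l1_page_table;
}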