x86: fix pte corruption when setting large regions

We need a copy of the flags field for every PTE we are
updating; we can't just keep OR-ing the address field into
the same flags value.

Fixes the PTE corruption seen when setting flags for memory
regions larger than a page.
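
To make the failure mode concrete, here is a minimal standalone sketch
(not the Zephyr MMU code itself; the helper names, constants, and the
fake_pte array are hypothetical) of how OR-ing each page's address into
the shared flags value leaks address bits from earlier pages into later
entries, and how a per-iteration copy avoids it:

#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal standalone sketch with hypothetical names; not the Zephyr code. */

#define PAGE_SIZE  0x1000u
#define PTE_P      0x1u           /* "present" flag bit */

static uint32_t fake_pte[2];      /* stand-in page table: one entry per page */

/* Buggy pattern: each page's address is OR-ed into 'flags' itself, so
 * address bits from earlier pages accumulate into every later entry.
 */
static void set_region_buggy(uint32_t base, size_t size, uint32_t flags)
{
	for (uint32_t addr = base; size != 0; addr += PAGE_SIZE, size -= PAGE_SIZE) {
		flags |= addr;                           /* accumulates across iterations */
		fake_pte[(addr - base) / PAGE_SIZE] = flags;
	}
}

/* Fixed pattern: work on a per-iteration copy so each entry only picks
 * up its own address bits, mirroring the cur_flags change below.
 */
static void set_region_fixed(uint32_t base, size_t size, uint32_t flags)
{
	for (uint32_t addr = base; size != 0; addr += PAGE_SIZE, size -= PAGE_SIZE) {
		uint32_t cur_flags = flags | addr;
		fake_pte[(addr - base) / PAGE_SIZE] = cur_flags;
	}
}

int main(void)
{
	set_region_buggy(0x3000, 2 * PAGE_SIZE, PTE_P);
	printf("buggy: pte[1] = %#" PRIx32 "\n", fake_pte[1]);  /* 0x7001: 0x3000 leaked from page 0 */

	set_region_fixed(0x3000, 2 * PAGE_SIZE, PTE_P);
	printf("fixed: pte[1] = %#" PRIx32 "\n", fake_pte[1]);  /* 0x4001: only this page's address */
	return 0;
}

Running the sketch prints pte[1] = 0x7001 for the buggy loop (the
0x3000 bits from the first page leak into the second entry) versus
0x4001 for the fixed one.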

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
commit 1459bed346
Author: Andrew Boie
Date:   2019-03-02 11:40:23 -08:00
Committed-by: Anas Nashif

@@ -195,6 +195,7 @@ void _x86_mmu_set_flags(struct x86_mmu_pdpt *pdpt, void *ptr,
 	}
 
 	while (size != 0) {
+		x86_page_entry_data_t cur_flags = flags;
 
 		/* TODO we're not generating 2MB entries at the moment */
 		__ASSERT(X86_MMU_GET_PDE(pdpt, addr)->ps != 1, "2MB PDE found");
@@ -206,10 +207,10 @@ void _x86_mmu_set_flags(struct x86_mmu_pdpt *pdpt, void *ptr,
 		 */
 		if (((mask & MMU_PTE_P_MASK) != 0) &&
 		    ((flags & MMU_ENTRY_PRESENT) != 0)) {
-			flags |= addr;
+			cur_flags |= addr;
 		}
 
-		pte->value = (pte->value & ~mask) | flags;
+		pte->value = (pte->value & ~mask) | cur_flags;
 		tlb_flush_page((void *)addr);
 
 		size -= MMU_PAGE_SIZE;