/*
 * Copyright (c) 2011-2014 Wind River Systems, Inc.
 * Copyright (c) 2017 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <kernel.h>
#include <mmustructs.h>
#include <linker/linker-defs.h>
#include <kernel_internal.h>
#include <init.h>

/* Common regions for all x86 processors.
 * Peripheral I/O ranges configured at the SOC level
 */

/* Mark text and rodata as read-only.
 * Userspace may read all text and rodata.
 */
MMU_BOOT_REGION((u32_t)&_image_text_start, (u32_t)&_image_text_size,
		MMU_ENTRY_READ | MMU_ENTRY_USER);

MMU_BOOT_REGION((u32_t)&_image_rodata_start, (u32_t)&_image_rodata_size,
		MMU_ENTRY_READ | MMU_ENTRY_USER |
		MMU_ENTRY_EXECUTE_DISABLE);

#ifdef CONFIG_USERSPACE
MMU_BOOT_REGION((u32_t)&_app_smem_start, (u32_t)&_app_smem_size,
		MMU_ENTRY_WRITE | MMU_ENTRY_RUNTIME_USER |
		MMU_ENTRY_EXECUTE_DISABLE);
#endif

#ifdef CONFIG_COVERAGE_GCOV
MMU_BOOT_REGION((u32_t)&__gcov_bss_start, (u32_t)&__gcov_bss_size,
		MMU_ENTRY_WRITE | MMU_ENTRY_USER | MMU_ENTRY_EXECUTE_DISABLE);
#endif

/* __kernel_ram_size includes all unused memory, which is used for heaps.
 * User threads cannot access this unless granted at runtime. This is done
 * automatically for stacks.
 */
MMU_BOOT_REGION((u32_t)&__kernel_ram_start, (u32_t)&__kernel_ram_size,
		MMU_ENTRY_WRITE |
		MMU_ENTRY_RUNTIME_USER |
		MMU_ENTRY_EXECUTE_DISABLE);

void z_x86_mmu_get_flags(struct x86_mmu_pdpt *pdpt, void *addr,
			 x86_page_entry_data_t *pde_flags,
			 x86_page_entry_data_t *pte_flags)
{
	*pde_flags =
		(x86_page_entry_data_t)(X86_MMU_GET_PDE(pdpt, addr)->value &
			~(x86_page_entry_data_t)MMU_PDE_PAGE_TABLE_MASK);

	if ((*pde_flags & MMU_ENTRY_PRESENT) != 0) {
		*pte_flags = (x86_page_entry_data_t)
			(X86_MMU_GET_PTE(pdpt, addr)->value &
			 ~(x86_page_entry_data_t)MMU_PTE_PAGE_MASK);
	} else {
		*pte_flags = 0;
	}
}

int z_arch_buffer_validate(void *addr, size_t size, int write)
{
	u32_t start_pde_num;
	u32_t end_pde_num;
	u32_t starting_pte_num;
	u32_t ending_pte_num;
	u32_t pde;
	u32_t pte;
	union x86_mmu_pte pte_value;
	u32_t start_pdpte_num = MMU_PDPTE_NUM(addr);
	u32_t end_pdpte_num = MMU_PDPTE_NUM((char *)addr + size - 1);
	u32_t pdpte;
	struct x86_mmu_pt *pte_address;
	int ret = -EPERM;

	start_pde_num = MMU_PDE_NUM(addr);
	end_pde_num = MMU_PDE_NUM((char *)addr + size - 1);
	starting_pte_num = MMU_PAGE_NUM((char *)addr);

	for (pdpte = start_pdpte_num; pdpte <= end_pdpte_num; pdpte++) {
		if (pdpte != start_pdpte_num) {
			start_pde_num = 0U;
		}

		if (pdpte != end_pdpte_num) {
			end_pde_num = 511U;
		} else {
			end_pde_num = MMU_PDE_NUM((char *)addr + size - 1);
		}

		/* Ensure page directory pointer table entry is present */
		if (X86_MMU_GET_PDPTE_INDEX(&USER_PDPT, pdpte)->p == 0) {
			goto out;
		}

		struct x86_mmu_pd *pd_address =
			X86_MMU_GET_PD_ADDR_INDEX(&USER_PDPT, pdpte);

		/* Iterate over all the pde's the buffer might take up
		 * (depends on the size of the buffer and its start address).
		 */
		for (pde = start_pde_num; pde <= end_pde_num; pde++) {
			union x86_mmu_pde_pt pde_value =
				pd_address->entry[pde].pt;

			if ((pde_value.p) == 0 ||
			    (pde_value.us) == 0 ||
			    ((write != 0) && (pde_value.rw == 0))) {
				goto out;
			}

			pte_address = (struct x86_mmu_pt *)
				(pde_value.pt << MMU_PAGE_SHIFT);

			/* Loop over all the possible page tables for the
			 * required size. If the pde is not the last one,
			 * the last pte is 511, so every pde except the
			 * last uses all of its page table entries. For
			 * the last pde, the pte is calculated from the
			 * last memory address of the buffer.
			 */
			if (pde != end_pde_num) {
				ending_pte_num = 511U;
			} else {
				ending_pte_num =
					MMU_PAGE_NUM((char *)addr + size - 1);
			}

			/* All the pde's apart from the starting pde have
			 * a start pte number of zero.
			 */
			if (pde != start_pde_num) {
				starting_pte_num = 0U;
			}

			pte_value.value = 0xFFFFFFFFU;

			/* Bitwise AND all the pte values; an optimization
			 * so the compare is done only once.
			 */
			for (pte = starting_pte_num;
			     pte <= ending_pte_num;
			     pte++) {
				pte_value.value &=
					pte_address->entry[pte].value;
			}

			if ((pte_value.p) == 0 ||
			    (pte_value.us) == 0 ||
			    ((write != 0) && (pte_value.rw == 0))) {
				goto out;
			}
		}
	}

	ret = 0;
out:
#ifdef CONFIG_BOUNDS_CHECK_BYPASS_MITIGATION
	__asm__ volatile ("lfence" : : : "memory");
#endif

	return ret;
}

static inline void tlb_flush_page(void *addr)
{
	/* Invalidate TLB entries corresponding to the page containing the
	 * specified address
	 */
	char *page = (char *)addr;

	__asm__ ("invlpg %0" :: "m" (*page));
}

void z_x86_mmu_set_flags(struct x86_mmu_pdpt *pdpt, void *ptr,
			 size_t size,
			 x86_page_entry_data_t flags,
			 x86_page_entry_data_t mask)
{
	union x86_mmu_pte *pte;

	u32_t addr = (u32_t)ptr;

	__ASSERT((addr & MMU_PAGE_MASK) == 0U, "unaligned address provided");
	__ASSERT((size & MMU_PAGE_MASK) == 0U, "unaligned size provided");

	/* L1TF mitigation: non-present PTEs will have address fields
	 * zeroed. Expand the mask to include address bits if we are changing
	 * the present bit.
	 */
	if ((mask & MMU_PTE_P_MASK) != 0) {
		mask |= MMU_PTE_PAGE_MASK;
	}

	while (size != 0) {
		x86_page_entry_data_t cur_flags = flags;

		/* TODO we're not generating 2MB entries at the moment */
		__ASSERT(X86_MMU_GET_PDE(pdpt, addr)->ps != 1, "2MB PDE found");
		pte = X86_MMU_GET_PTE(pdpt, addr);

		/* If we're setting the present bit, restore the address
		 * field. If we're clearing it, then the address field
		 * will be zeroed instead, mapping the PTE to the NULL page.
		 */
		if (((mask & MMU_PTE_P_MASK) != 0) &&
		    ((flags & MMU_ENTRY_PRESENT) != 0)) {
			cur_flags |= addr;
		}

		pte->value = (pte->value & ~mask) | cur_flags;
		tlb_flush_page((void *)addr);

		size -= MMU_PAGE_SIZE;
		addr += MMU_PAGE_SIZE;
	}
}

#ifdef CONFIG_X86_USERSPACE
void z_x86_reset_pages(void *start, size_t size)
{
#ifdef CONFIG_X86_KPTI
	/* Clear both the present bit and the access flags. Only applies
	 * to threads running in user mode.
	 */
	z_x86_mmu_set_flags(&z_x86_user_pdpt, start, size,
			    MMU_ENTRY_NOT_PRESENT,
			    K_MEM_PARTITION_PERM_MASK | MMU_PTE_P_MASK);
#else
	/* Mark as supervisor read-write, user mode no access */
	z_x86_mmu_set_flags(&z_x86_kernel_pdpt, start, size,
			    K_MEM_PARTITION_P_RW_U_NA,
			    K_MEM_PARTITION_PERM_MASK);
#endif /* CONFIG_X86_KPTI */
}

static inline void activate_partition(struct k_mem_partition *partition)
{
	/* Set the partition attributes */
	u64_t attr, mask;

#ifdef CONFIG_X86_KPTI
	attr = partition->attr | MMU_ENTRY_PRESENT;
	mask = K_MEM_PARTITION_PERM_MASK | MMU_PTE_P_MASK;
#else
	attr = partition->attr;
	mask = K_MEM_PARTITION_PERM_MASK;
#endif /* CONFIG_X86_KPTI */

	z_x86_mmu_set_flags(&USER_PDPT,
			    (void *)partition->start,
			    partition->size, attr, mask);
}

/* Helper macros to be passed to x86_mem_domain_pages_update() */
#define X86_MEM_DOMAIN_SET_PAGES   (0U)
#define X86_MEM_DOMAIN_RESET_PAGES (1U)

/* Pass 1 as page_conf if a reset of the mem domain pages is needed,
 * else pass 0.
 */
static inline void x86_mem_domain_pages_update(struct k_mem_domain *mem_domain,
					       u32_t page_conf)
{
	u32_t partition_index;
	u32_t total_partitions;
	struct k_mem_partition *partition;
	u32_t partitions_count;

	/* If mem_domain doesn't point to a valid location, return */
	if (mem_domain == NULL) {
		goto out;
	}

	/* Get the total number of partitions */
	total_partitions = mem_domain->num_partitions;

	/* Iterate over all the partitions for the given mem_domain.
	 * For x86: iterate over all the partitions and set the
	 * required flags in the correct MMU page tables.
	 */
	partitions_count = 0U;
	for (partition_index = 0U;
	     partitions_count < total_partitions;
	     partition_index++) {
		/* Get the partition info */
		partition = &mem_domain->partitions[partition_index];
		if (partition->size == 0U) {
			continue;
		}
		partitions_count++;
		if (page_conf == X86_MEM_DOMAIN_SET_PAGES) {
			activate_partition(partition);
		} else {
			z_x86_reset_pages((void *)partition->start,
					  partition->size);
		}
	}
out:
	return;
}

/* Load the partitions of the thread. */
void z_arch_mem_domain_configure(struct k_thread *thread)
{
	x86_mem_domain_pages_update(thread->mem_domain_info.mem_domain,
				    X86_MEM_DOMAIN_SET_PAGES);
}

/* Destroy or reset the mmu page tables when necessary.
 * Needed when either swap takes place or k_mem_domain_destroy is called.
 */
void z_arch_mem_domain_destroy(struct k_mem_domain *domain)
{
	x86_mem_domain_pages_update(domain, X86_MEM_DOMAIN_RESET_PAGES);
}

/* Reset/destroy one partition specified in the argument of the API. */
void z_arch_mem_domain_partition_remove(struct k_mem_domain *domain,
					u32_t partition_id)
{
	struct k_mem_partition *partition;

	__ASSERT_NO_MSG(domain != NULL);
	__ASSERT(partition_id <= domain->num_partitions,
		 "invalid partitions");

	partition = &domain->partitions[partition_id];
	z_x86_reset_pages((void *)partition->start, partition->size);
}

/* Add one partition specified in the argument of the API. */
void z_arch_mem_domain_partition_add(struct k_mem_domain *domain,
				     u32_t partition_id)
{
	struct k_mem_partition *partition;

	__ASSERT_NO_MSG(domain != NULL);
	__ASSERT(partition_id <= domain->num_partitions,
		 "invalid partitions");

	partition = &domain->partitions[partition_id];
	activate_partition(partition);
}

int z_arch_mem_domain_max_partitions_get(void)
{
	return CONFIG_MAX_DOMAIN_PARTITIONS;
}
#endif /* CONFIG_X86_USERSPACE */