Rename reserved function names in the arch/ subdirectory. The Python
script gen_priv_stacks.py was updated to follow the 'z_' prefix
naming.
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
MISRA defines a series of essential types (boolean, signed/unsigned
integers, float, ...) and operations must respect these essential types.
MISRA-C rule 10.1
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Update reserved function names starting with one underscore, replacing
them as follows:
'_k_' with 'z_'
'_K_' with 'Z_'
'_handler_' with 'z_handl_'
'_Cstart' with 'z_cstart'
'_Swap' with 'z_swap'
This renaming is done for both global and static function names
in kernel/include and include/. Other static function names in
kernel/ are renamed by removing the leading underscore. Other
function names not starting with any prefix listed above are
renamed with a 'z_' or 'Z_' prefix.
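For illustration only, a hypothetical call site before and after the
rename (the surrounding code is invented; the mapping is the one
listed above):

/* before */
_Swap(irq_lock());

/* after */
z_swap(irq_lock());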
Function names starting with two or three leading underscores are not
automatically renamed since these names will collide with the variants
with two or three leading underscores.
Various generator scripts have also been updated as well as perf,
linker and usb files. These are
drivers/serial/uart_handlers.c
include/linker/kobject-text.ld
kernel/include/syscall_handler.h
scripts/gen_kobject_list.py
scripts/gen_syscall_header.py
Signed-off-by: Patrik Flykt <patrik.flykt@intel.com>
Add lfences at two points to disable speculation:
* In the memory buffer validation code, which takes memory
addresses and sizes from userspace and determines whether
this memory is actually accessible.
* In the system call landing site, after the system call ID
has been validated but before it is used.
Kconfigs have been added to enable these checks if the CPU
is not known to be immune on X86.
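As a rough sketch of the first point (not the actual implementation;
the helper name is a placeholder), the barrier sits after the
permission check so the check retires before any dependent load can
start speculatively:

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/* Placeholder for the real page-table walk. */
extern bool buffer_is_accessible(const void *addr, size_t size, bool write);

static inline int validate_user_buffer(const void *addr, size_t size, bool write)
{
        if (!buffer_is_accessible(addr, size, write)) {
                return -EPERM;
        }
        /* lfence: no speculative load through 'addr' may issue before
         * the check above has resolved.
         */
        __asm__ volatile ("lfence" ::: "memory");
        return 0;
}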
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
On x86, if a supervisor thread belonging to a memory domain
adds a new partition to that domain, the page tables are not
correctly set up for subsequent context switches to another
thread in the same domain, or for dropping itself to user mode.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We need a copy of the flags field for every PTE we are
updating; we can't just keep OR-ing in the address
field.
Fixes issues seen when setting flags for memory regions
larger than a page.
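A minimal sketch of the corrected loop shape (macro values and the
set_pte() helper are placeholders, and the region is assumed to be
identity-mapped for simplicity):

#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE     4096U
#define PTE_ADDR_MASK 0x000FFFFFFFFFF000ULL   /* PAE address bits 12..51 */

extern void set_pte(uintptr_t virt, uint64_t entry);   /* placeholder */

void set_region_flags(uintptr_t start, size_t size, uint64_t flags)
{
        for (uintptr_t addr = start; addr < start + size; addr += PAGE_SIZE) {
                uint64_t entry = flags;         /* fresh copy for every PTE */

                entry |= addr & PTE_ADDR_MASK;  /* OR in this page's address */
                set_pte(addr, entry);
        }
}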
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
During speculative execution, non-present pages are treated
as valid, which may expose their contents through side
channels.
Any non-present PTE will now have its address bits zeroed,
such that any speculative reads to them will go to the NULL
page.
The expected hit on performance is so minor that this is
enabled at all times.
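Conceptually the update looks like the sketch below (flag names are
placeholders, not the real macros):

#include <stdint.h>

#define PTE_PRESENT   0x1ULL
#define PTE_ADDR_MASK 0x000FFFFFFFFFF000ULL

/* Clearing the address bits along with the present bit makes any
 * speculative load through the stale entry resolve to page 0.
 */
static uint64_t make_not_present(uint64_t pte)
{
        pte &= ~PTE_PRESENT;
        pte &= ~PTE_ADDR_MASK;
        return pte;
}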
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
PAE page tables (the only kind we support) have 512
entries per page directory, not 1024.
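The arithmetic, for reference (macro names invented): a page directory
occupies one 4 KB page and PAE entries are 8 bytes wide, so

#include <stdint.h>

#define MMU_PAGE_SIZE   4096U
#define PAE_ENTRY_SIZE  sizeof(uint64_t)                  /* 8 bytes */
#define PD_ENTRIES      (MMU_PAGE_SIZE / PAE_ENTRY_SIZE)  /* 512, not 1024 */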
Fixes: #13838
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This is an integral part of userspace and cannot be used
on its own. Fold into the main userspace configuration.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The code wasn't checking whether the memory address being
validated corresponded to a non-present page directory pointer
table entry.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.
Adjustments to page tables are now as follows:
- Any adjustments for stack memory access now are always done
to the user page tables
- Any adjustments for memory domains are now always done to
the user page tables
- With KPTI, resetting a page now clears the present bit
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
At boot, user threads were being granted access to the entire
app shared memory section. This is incorrect; user threads should
have no access until they are added to a memory domain, which
may contain partitions defined within it.
Change from MMU_ENTRY_USER (which grants permission at boot)
to MMU_ENTRY_RUNTIME_USER (which indicates that the pages may
be granted to user mode at runtime, but not at boot).
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.
To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This diverges from policy for all of our other arches
and C libraries. Global access to the malloc arena
may not be desirable.
A forthcoming patch will expose, for all C libraries, a
k_mem_partition containing the malloc arena, which can be
added to domains as desired.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
PAE tables introduce the NX bit, which is very desirable
from a security perspective; PAE itself dates back to 1995.
PAE tables are larger, but we are not targeting x86 memory
protection for RAM constrained devices.
Remove the old style 32-bit tables to make the x86 port
easier to maintain.
Renamed some verbosely named data structures, and fixed
incorrect number of entries for the page directory
pointer table.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
This patch adds all the hooks needed in the kernel to
get coverage reports from x86 SoCs.
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
When __ASSERT is not enabled, the variable total_partitions is
assigned but never used.
MISRA-C rule 2.2
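The shape of the problem, as an invented excerpt (the helper is a
placeholder; __ASSERT is Zephyr's assert macro):

extern unsigned int max_partitions_get(void);   /* placeholder */

static void check_partitions(unsigned int requested_partitions)
{
        unsigned int total_partitions = max_partitions_get();

        /* With asserts disabled this expands to nothing, leaving
         * total_partitions assigned but never read (MISRA-C rule 2.2).
         */
        __ASSERT(requested_partitions <= total_partitions,
                 "too many partitions requested");
}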
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Summary: revised attempt at addressing issue 6290. The
following provides an alternative to using
CONFIG_APPLICATION_MEMORY by compartmentalizing data into
Memory Domains. Depending on MPU limitations, it supports
compartmentalized Memory Domains for 1...N logical
applications. This is considered an initial attempt at
designing flexible compartmentalized Memory Domains for
multiple logical applications and, with the provided python
script and edited CMakeLists.txt, provides support for power
of 2 aligned MPU architectures.
Overview: The current patch uses qualifiers to group data into
subsections. The qualifier usage allows for dynamic subsection
creation and affords the developer a large amount of flexibility
in the grouping, naming, and size of the resulting partitions and
domains that are built on these subsections. Through additional macro
calls, functions are created that help calculate the size,
address, and permissions for the subsections and enable the
developer to control application data in specified partitions and
memory domains.
Background: Initial attempts focused on creating a single
section in the linker script that then contained internally
grouped variables/data to allow MPU/MMU alignment and protection.
This did not provide additional functionality beyond
CONFIG_APPLICATION_MEMORY as we were unable to reliably group
data or determine their grouping via exported linker symbols.
Thus, the resulting decision was made to dynamically create
subsections using the current qualifier method. An attempt to
group the data by object file was tested, but it was found that
this broke applications such as ztest, where two object files are
created: ztest and main. This also creates an issue of grouping
the two object files together in the same memory domain while
also allowing for compartmentalizing other data among threads.
Because it is not possible to know a) the name of the partition
and thus the symbol in the linker, b) the size of all the data
in the subsection, nor c) the overall number of partitions
created by the developer, it was not feasible to align the
subsections at compile time without using a dynamically generated
linker script for MPU architectures requiring power of 2
alignment.
In order to provide support for MPU architectures that require a
power of 2 alignment, a Python script is run at build time, before
linker_priv_stacks.cmd is generated. This script scans the
built object files for all possible partitions and the names given
to them. It then generates a linker file (app_smem.ld) that is
included in the main linker.ld file. This app_smem.ld allows the
compiler and linker to then create each subsection and align to
the next power of 2.
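The alignment rule the generator applies boils down to rounding each
partition's size up to the next power of 2; a sketch of the same
computation in C (the script itself is Python):

#include <stddef.h>

static size_t round_up_to_power_of_two(size_t n)
{
        size_t p = 1;

        while (p < n) {
                p <<= 1;
        }
        return p;   /* e.g. 3000 -> 4096; this also becomes the alignment */
}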
Usage:
- Requires: app_memory/app_memdomain.h.
- _app_dmem(id) marks a variable to be placed into a data
section for memory partition id.
- _app_bmem(id) marks a variable to be placed into a bss
section for memory partition id.
- These are seen in the linker.map as "data_smem_id" and
"data_smem_idb".
- To create a k_mem_partition, call the macro
app_mem_partition(part0) where "part0" is the name then used to
refer to that partition. This macro only creates a function and
the necessary data structures for the later "initialization".
- To create a memory domain for the partition, the macro
app_mem_domain(dom0) is called where "dom0" is the name then
used for the memory domain.
- To initialize the partition (effectively adding the partition
to a linked list), init_part_part0() is called. This is followed
by init_app_memory(), which walks all partitions in the linked
list and calculates the sizes for each partition.
- Once the partition is initialized, the domain can be
initialized with init_domain_dom0(part0) which initializes the
domain with partition part0.
- After the domain has been initialized, the current thread
can be added using add_thread_dom0(k_current_get()).
- The code used in ztests and kernel/init has been added under
a conditional #ifdef to isolate the code from other tests.
The userspace test CMakeLists.txt file has commands to insert
the CONFIG_APP_SHARED_MEM definition into the required build
targets.
Example:
/* create partition at top of file outside functions */
app_mem_partition(part0);
/* create domain */
app_mem_domain(dom0);
_app_dmem(dom0) int var1;
_app_bmem(dom0) static volatile int var2;
int main()
{
        init_part_part0();
        init_app_memory();
        init_domain_dom0(part0);
        add_thread_dom0(k_current_get());
        ...
}
- If multiple partitions are being created, a variadic
preprocessor macro can be used as provided in
app_macro_support.h:
FOR_EACH(app_mem_partition, part0, part1, part2);
or, for multiple domains, similarly:
FOR_EACH(app_mem_domain, dom0, dom1);
Similarly, the init_part_* calls can also be used in the macro:
FOR_EACH(init_part, part0, part1, part2);
Testing:
- This has been successfully tested on qemu_x86 and the
ARM frdm_k64f board. It compiles and builds power of 2
aligned subsections for the linker script on the 96b_carbon
boards. These power of 2 alignments have been checked by
hand and are viewable in the zephyr.map file that is
produced during build. However, due to a shortage of
available MPU regions on the 96b_carbon board, we are unable
to test this.
- When run on the 96b_carbon board, the test suite will
enter execution, but each individual test will fail due to
an MPU FAULT. This is expected as the required number of
MPU regions exceeds the number allowed due to the static
allocation. As the MPU driver does not detect this issue,
the fault occurs because the data being accessed has been
placed outside the active MPU region.
- This now compiles successfully for the ARC boards
em_starterkit_em7d and em_starterkit_em7d_v22. However,
as we lack ARC hardware to run this build on, we are unable
to test this build.
Current known issues:
1) While the script and edited CMakeLists.txt create the
ability to align to the next power of 2, this does not
address the shortage of available MPU regions on certain
devices (e.g. 96b_carbon). In testing, the APB and PPB
regions were commented out.
2) checkpatch.pl lists several issues regarding the
following:
a) Complex macros. The FOR_EACH macros as defined in
app_macro_support.h are listed as complex macros needing
parentheses. Adding parentheses breaks their
functionality, and we have otherwise been unable to
resolve the reported error.
b) __aligned() preferred. The _app_dmem_pad() and
_app_bmem_pad() macros give warnings that __aligned()
is preferred. Prior iterations had this implementation,
which resulted in errors due to "complex macros".
c) Trailing semicolon. The macro init_part(name) has
a trailing semicolon as the semicolon is needed for the
inlined macro call that is generated when this macro
expands.
Update: updated as an alternative to CONFIG_APPLICATION_MEMORY.
Added config option CONFIG_APP_SHARED_MEM to enable a new section
app_smem to contain the shared memory component. This commit
separates the Kconfig definition from the definition used for the
conditional code. The change is in response to changes in the
way the build system treats definitions. The Python script used
to generate a linker script for app_smem was also modified to
simplify the alignment directives. A default linker script
app_smem.ld was added to remove the conditional includes' dependency
on CONFIG_APP_SHARED_MEM. By adding the default linker script,
the prebuild stages link properly before the Python script runs.
Signed-off-by: Joshua Domagalski <jedomag@tycho.nsa.gov>
Signed-off-by: Shawn Mosley <smmosle@tycho.nsa.gov>
MPU devices that enforce power-of-two alignment now
specify the size of the buffer used for the newlib heap.
This buffer will be properly aligned and a pointer
exposed in a kernel header, such that it can be added
to a user thread's memory domain configuration if
necessary.
MPU devices that don't have these restrictions allocate
the heap as normal.
In all cases, if an MPU/MMU region needs to be programmed,
the z_newlib_get_heap_bounds() API will return the necessary
information.
Given how precious MPU regions are, no automatic programming
of the MPU is done; applications will need to do this as
needed in their memory domain configurations.
On x86, the x86 MMU-specific code has been moved to arch/x86
using the new z_newlib_get_heap_bounds() API.
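A sketch of how an application might feed this into a memory domain
(the partition/domain variables are made up, and the out-parameter
signature of z_newlib_get_heap_bounds() is assumed here):

#include <kernel.h>

extern void z_newlib_get_heap_bounds(void **base, size_t *size); /* assumed */

static struct k_mem_partition heap_part;

void grant_heap_to_domain(struct k_mem_domain *dom)
{
        void *base;
        size_t size;

        z_newlib_get_heap_bounds(&base, &size);

        heap_part.start = (uintptr_t)base;
        heap_part.size = size;
        heap_part.attr = K_MEM_PARTITION_P_RW_U_RW;

        k_mem_domain_add_partition(dom, &heap_part);
}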
Fixes: #6814
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
When the current thread is added to a memory domain, the pages/sections
must be configured immediately.
A problem occurs when we add the current thread to a domain and then
drop down to user mode: in that case the memory domain only becomes
active the next time a swap occurs.
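The triggering pattern looks roughly like this (domain and entry
function are hypothetical):

#include <kernel.h>

extern struct k_mem_domain app_domain;                  /* hypothetical */
extern void user_entry(void *p1, void *p2, void *p3);   /* hypothetical */

void become_user_thread(void)
{
        /* The domain must take effect immediately: this same thread
         * drops to user mode right after being added, with no swap
         * in between.
         */
        k_mem_domain_add_thread(&app_domain, k_current_get());
        k_thread_user_mode_enter(user_entry, NULL, NULL, NULL);
}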
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Add architecture-specific code for the memory domain
configuration. This is needed to support the memory domain API
k_mem_domain_add_thread().
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Created the structures and unions needed to enable the software to
access the x86 MMU page tables.
Also updated the helper macros to ease the usage of the MMU page
tables.
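For illustration, the kind of union this enables, abridged and with
invented names (the real definitions differ in naming and detail):

#include <stdint.h>

union pae_pte {
        uint64_t value;                  /* raw 64-bit entry */
        struct {
                uint64_t p        : 1;   /* present */
                uint64_t rw       : 1;   /* writable */
                uint64_t us       : 1;   /* user/supervisor */
                uint64_t pwt      : 1;   /* write-through */
                uint64_t pcd      : 1;   /* cache disable */
                uint64_t a        : 1;   /* accessed */
                uint64_t d        : 1;   /* dirty */
                uint64_t pat      : 1;
                uint64_t g        : 1;   /* global */
                uint64_t ignored1 : 3;
                uint64_t addr     : 40;  /* page frame, address bits 12..51 */
                uint64_t ignored2 : 11;
                uint64_t xd       : 1;   /* execute disable */
        } field;
};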
JIRA: ZEP-2511
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Looking up the PTE flags was page faulting if the address wasn't
marked as present in the page directory, since there is no page table
for that directory entry.
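The fix amounts to a guard of roughly this shape before touching the
page table (names are placeholders):

#include <stdbool.h>
#include <stdint.h>

#define PDE_PRESENT 0x1ULL

/* Bail out instead of walking into a page table that does not exist. */
static bool pte_flags_available(uint64_t pde)
{
        return (pde & PDE_PRESENT) != 0;
}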
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
- There's no point in building up "validity" (declared volatile for some
strange reason); just exit with a false return value if any of the page
directory or page table checks don't come out as expected
- The function was returning the opposite value as its documentation
(0 on success, -EPERM on failure). Documentation updated.
- This function will only be used to verify buffers from user-space.
There's no need for a flags parameter, the only option that needs to
be passed in is whether the buffer has write permissions or not.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
The starting PTE number (starting_pte_num) was not
calculated correctly. If the size of the buffer exceeded 4 KB,
the buffer validation API failed.
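For reference, the index math for a PAE walk (a worked sketch, not the
patched code): the PTE index is bits 12-20 of the address, so a buffer
larger than 4 KB spans more than one PTE and may cross into the next
page table:

#include <stddef.h>
#include <stdint.h>

#define PTE_INDEX(addr) (((uintptr_t)(addr) >> 12) & 0x1FF) /* 512 PTEs/table */

static void pte_span(const void *addr, size_t size,
                     uint32_t *first, uint32_t *last)
{
        *first = PTE_INDEX(addr);
        *last  = PTE_INDEX((const uint8_t *)addr + size - 1);
}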
JIRA: ZEP-2489
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Configuring the RAM/ROM regions will be the same for all
x86 targets as this is done with linker symbols.
Peripheral configuration is left at the SoC level.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Page faults will additionally dump out some interesting
page directory and page table flags for the faulting
memory address.
Intended to help determine whether the page tables have been
configured incorrectly as we enable memory protection features.
This only happens if CONFIG_EXCEPTION_DEBUG is turned on.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
We now create a special IA hardware task for handling
double faults. This has a known good stack so that if
the kernel tries to push stack data onto an unmapped page,
we don't triple-fault and reset the system.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
A user space buffer must be validated before the required operation
can proceed. This API will check the current MMU
configuration to determine if the buffer held by the user is valid.
Jira: ZEP-2326
Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>