Mostly a revert of commit b1def7145f ("arch: deprecate `_current`").
This commit was part of PR #80716, whose initial purpose was to provide
an architecture-specific optimization for _current. The actual deprecation
was sneaked in later on without proper discussion.
The Zephyr core has always used _current and that was fine. Its use is
quite prevalent as well, and the alternative is proving rather verbose.
Furthermore, as a concept, the "current thread" is not something that is
necessarily architecture specific. Therefore the primary abstraction
should not carry the arch_ prefix.
Hence this revert.
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
`_current` is now functionally equal to `arch_curr_thread()`; remove
its in-tree usage and deprecate it instead of removing it outright,
as it has been with us since forever.
Signed-off-by: Yong Cong Sin <ycsin@meta.com>
Signed-off-by: Yong Cong Sin <yongcong.sin@gmail.com>
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Also any demand paging and page frame related bits are
renamed.
This is part of a series to move memory management related
stuff out of the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management related
stuff from the Z_ namespace into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is part of a series to move memory management functions
away from the z_ namespace and into its own namespace. Also
make documentation available via doxygen.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Introduce z_page_frame_set() and z_page_frame_clear() to manipulate
flags. Obtain the virtual address using the existing
z_page_frame_to_virt(). This will make changes to the page frame
structure easier.
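A minimal sketch of the new accessors in use; the helper, z_phys_to_page_frame() and the Z_PAGE_FRAME_PINNED flag are assumptions used only for illustration:
```c
#include "mmu.h" /* kernel-internal page frame definitions (assumed path) */

static void pin_frame_example(uintptr_t phys)
{
    struct z_page_frame *pf = z_phys_to_page_frame(phys);

    z_page_frame_set(pf, Z_PAGE_FRAME_PINNED);    /* was: pf->flags |= ...  */
    void *va = z_page_frame_to_virt(pf);          /* existing accessor      */

    /* ... work on the pinned page via va ... */
    ARG_UNUSED(va);

    z_page_frame_clear(pf, Z_PAGE_FRAME_PINNED);  /* was: pf->flags &= ~... */
}
```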
Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
This moves the k_* memory management functions from sys/ into
kernel/ includes, as these are kernel public APIs. The z_*
functions are further separated into the kernel internal
header directory.
Also made a quick change to doxygen to group sys_mem_* into
the OS Memory Management group so they will appear in the documentation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The init infrastructure, found in `init.h`, is currently used by:
- `SYS_INIT`: to call functions before `main`
- `DEVICE_*`: to initialize devices
They are all sorted according to an initialization level + a priority.
`SYS_INIT` calls are really orthogonal to devices, however, the required
function signature requires a `const struct device *dev` as a first
argument. The only reason for that is because the same init machinery is
used by devices, so we have something like:
```c
struct init_entry {
    int (*init)(const struct device *dev);
    /* only set by DEVICE_*, otherwise NULL */
    const struct device *dev;
};
```
As a result, we end up with a weird/ugly pattern like this:
```c
static int my_init(const struct device *dev)
{
    /* always NULL! add ARG_UNUSED to avoid compiler warning */
    ARG_UNUSED(dev);
    ...
}
```
This is really a result of poor internals isolation. This patch proposes
making init entries more flexible so that they can accept system
initialization calls like this:
```c
static int my_init(void)
{
    ...
}
```
This is achieved using a union:
```c
union init_function {
    /* for SYS_INIT, used when init_entry.dev == NULL */
    int (*sys)(void);
    /* for DEVICE*, used when init_entry.dev != NULL */
    int (*dev)(const struct device *dev);
};

struct init_entry {
    /* stores the init function (either for SYS_INIT or DEVICE*) */
    union init_function init_fn;
    /* stores the device pointer for DEVICE*, NULL for SYS_INIT.
     * Tells which union entry to call.
     */
    const struct device *dev;
};
```
This solution **does not increase ROM usage** and makes it possible to
offer clean public APIs for both SYS_INIT and DEVICE*. Note, however,
that the init machinery keeps a coupling with devices.
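A minimal sketch of how the init loop can then dispatch on the union (simplified; the helper itself is illustrative, not the exact in-tree code):
```c
static int run_init_entry(const struct init_entry *entry)
{
    int rc;

    if (entry->dev != NULL) {
        /* DEVICE_*: call the device init with its device pointer */
        rc = entry->init_fn.dev(entry->dev);
    } else {
        /* SYS_INIT: plain void-argument init call */
        rc = entry->init_fn.sys();
    }

    return rc;
}
```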
**NOTE**: This is a breaking change! All `SYS_INIT` functions will need
to be converted to the new signature. See the script offered in the
following commit.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
init: convert SYS_INIT functions to the new signature
Conversion scripted using scripts/utils/migrate_sys_init.py.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
manifest: update projects for SYS_INIT changes
Update modules with updated SYS_INIT calls:
- hal_ti
- lvgl
- sof
- TraceRecorderSource
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: devicetree: devices: adjust test
Adjust test according to the recently introduced SYS_INIT
infrastructure.
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
tests: kernel: threads: adjust SYS_INIT call
Adjust to the new signature: int (*init_fn)(void);
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
MISRA C:2012 Rule 14.4 (The controlling expression of an if statement
and the controlling expression of an iteration-statement shall have
essentially Boolean type.)
Use `do { ... } while (false)' instead of `do { ... } while (0)'.
Use comparisons with zero instead of implicitly testing integers.
Use comparisons with NULL instead of implicitly testing pointers.
Use comparisons with NUL instead of implicitly testing plain chars.
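For illustration, a hypothetical macro and helper showing these transformations (none of the names below come from the audited branch):
```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* `while (false)` instead of `while (0)` */
#define BUF_RESET(buf, len) \
    do { \
        (void)memset((buf), 0, (len)); \
    } while (false)

/* Explicit comparisons instead of implicit tests */
static size_t str_len_capped(const char *s, size_t cap)
{
    size_t n = 0;

    if ((s != NULL) && (cap != 0U)) {  /* was: if (s && cap)  */
        while (*s != '\0') {           /* was: while (*s)     */
            if (n == cap) {
                break;
            }
            n++;
            s++;
        }
    }

    return n;
}
```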
This commit is a subset of the original auditable-branch commit:
5d02614e34a86b549c7707d3d9f0984bc3a5f22a
Signed-off-by: Simon Hein <SHein@baumer.com>
MISRA C:2012 Rule 21.13 (Any value passed to a function in <ctype.h>
shall be representable as an unsigned char or be the value EOF).
Functions in <ctype.h> have undefined behavior if they are called with
any other value. Callers affected by this change are not prepared to
handle EOF anyway. The addition of these casts avoids the issue
and does not result in any performance penalty.
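A small illustration of the pattern (the helper itself is hypothetical):
```c
#include <ctype.h>
#include <stdbool.h>

/* The cast guarantees the argument is always representable as an
 * unsigned char, as the rule requires. */
static bool str_is_digits(const char *s)
{
    while (*s != '\0') {
        if (isdigit((unsigned char)*s) == 0) { /* was: isdigit(*s) */
            return false;
        }
        s++;
    }

    return true;
}
```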
Signed-off-by: Abramo Bagnara <abramo.bagnara@bugseng.com>
Signed-off-by: Simon Hein <SHein@baumer.com>
Logging v1 has been removed and log_strdup wrapper function is no
longer needed. Removing the function and its use in the tree.
Signed-off-by: Krzysztof Chruscinski <krzysztof.chruscinski@nordicsemi.no>
In order to bring consistency in-tree, migrate all arch code to the new
prefix <zephyr/...>. Note that the conversion has been scripted, refer
to zephyrproject-rtos#45388 for more details.
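For example, the scripted conversion rewrites includes such as:
```c
/* Before */
#include <kernel.h>
#include <arch/cpu.h>

/* After */
#include <zephyr/kernel.h>
#include <zephyr/arch/cpu.h>
```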
Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
This changes the arch_mem_domain_*() functions to return errors.
This allows the callers a chance to recover if needed.
Note that:
() For assertions where it can bail out early without side
effects, these are converted to CHECKIF(). (Usually means
that updating of page tables or translation tables has not
been started yet.)
() Other assertions are retained to signal fatal errors during
development.
() The additional CHECKIF() checks are structured to bail out
early if possible. If errors are encountered inside a loop,
it will still continue with the loop, so that with assertions
disabled it behaves as it did before this change (see the
sketch below).
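A minimal sketch of the new pattern, assuming one of the arch_mem_domain_*() handlers (the function chosen, its body, and the include paths are illustrative):
```c
#include <kernel.h>
#include <sys/check.h>
#include <errno.h>

int arch_mem_domain_partition_add(struct k_mem_domain *domain,
                                  uint32_t partition_id)
{
    /* Bail out with an error code before any page table or
     * translation table update has started. */
    CHECKIF(domain == NULL) {
        return -EINVAL; /* was: __ASSERT(domain != NULL, ...) */
    }

    /* ... update the translation tables, propagating any error ... */

    return 0;
}
```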
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This function should be pinned in memory instead of simply
being put in the boot section, as this function will be
used when new threads are created at runtime.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
According to the Zephyr Coding Guideline all switch statements
shall be well-formed.
Add a comment to the empty default case.
Add a LOG_ERR to the default case.
Found as a coding guideline violation (MISRA R16.1) by a static
code scanning tool.
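For illustration, a well-formed switch of the kind described (the handler and module name are hypothetical):
```c
#include <logging/log.h>

LOG_MODULE_REGISTER(vector_demo, LOG_LEVEL_ERR);

static int handled;

/* Every case ends in a break and the default case is either
 * documented or logs the unexpected value. */
static void handle_vector(int vector)
{
    switch (vector) {
    case 0:
        /* Nothing to do for this vector; documented empty case */
        break;
    case 1:
        handled++;
        break;
    default:
        LOG_ERR("unexpected vector %d", vector);
        break;
    }
}
```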
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
Essential type of RHS operand (64 bit) is wider than essential
type of composite expression in LHS operand (32 bit).
LHS entry_val is 32 bit, and RHS (phys+offset) is 64 bit.
Cast RHS composite expression to the (pentry_t) type.
Found as a coding guideline violation (MISRA R10.7) by a static
code scanning tool.
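A sketch of the change (the helper is hypothetical; pentry_t is assumed to come from the arch/x86 MMU headers):
```c
#include <stddef.h>
#include <stdint.h>
#include "x86_mmu.h" /* provides pentry_t (32-bit when PAE=n) */

static pentry_t make_entry(uint64_t phys, size_t offset)
{
    /* Before (R10.7 violation): entry_val = phys + offset;         */
    /* After: cast the 64-bit composite expression to the LHS type. */
    pentry_t entry_val = (pentry_t)(phys + offset);

    return entry_val;
}
```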
Signed-off-by: Maksim Masalski <maksim.masalski@intel.com>
This marks code and data within x86/ia32 so they are going to
reside in the boot and pinned regions. This is a step toward
enabling demand paging for the whole kernel.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This implements arch_page_phys_get() to translate mapped
virtual addresses back to physical addresses.
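A hedged usage sketch (the caller and include paths are illustrative; a return of 0 means a valid mapping was found):
```c
#include <kernel.h>
#include <kernel_arch_interface.h> /* declares arch_page_phys_get() */

static void show_mapping(void *virt)
{
    uintptr_t phys;

    /* Look up the physical page backing a mapped virtual address. */
    if (arch_page_phys_get(virt, &phys) == 0) {
        printk("virt %p -> phys 0x%lx\n", virt, (unsigned long)phys);
    } else {
        printk("virt %p is not mapped\n", virt);
    }
}
```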
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
In z_mem_manage_init(), z_free_page_count is only manipulated
after all reserved pages are marked, and will reflect
the actual number of page frames being added to the free page
frame list. Manipulating z_free_page_count before this is
going to mess up the accounting, so remove the code to
decrement z_free_page_count in arch_reserved_pages_update()
under x86.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There was a restriction that KERNEL_VM_OFFSET must be equal to
SRAM_OFFSET so that the page directory pointer (PDP) or page
directory (PD) can be reused. This is not very practical in the
real world due to various hardware designs, especially those
where SRAM is not aligned to the PDP or PD. So rework those bits.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Both operands of an operator in which the usual arithmetic
conversions are performed shall have the same essential
type category.
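An illustrative fix for this class of violation (the helper and values are made up):
```c
#include <stdbool.h>
#include <stdint.h>

/* A relational operator mixing an essentially unsigned operand with an
 * essentially signed literal violates the rule, so the literal is
 * written in its unsigned form. */
static bool nibble_is_large(uint32_t value)
{
    uint32_t mask = 0xF0U;

    /* Before: return (value & mask) > 4;   (unsigned vs. signed) */
    return (value & mask) > 4U;
}
```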
Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
Tests of a value against zero should be made explicit, unless the
operand is effectively Boolean. This is based on MISRA rule 14.4.
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
When a large page is encountered, this changes the behavior from
asserting to copying the page directory entry to the new page
directory. This is needed when a large page entry is generated by
gen_mmu.py. Note that this still asserts when there are large page
entries at a higher level.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The whole page table is pre-allocated at build time and is
dependent on the range of address space. This kconfig allows
reserving extra pages (of size CONFIG_MMU_PAGE_SIZE) for
the page table so that gen_mmu.py can make use of these
extra pages.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
The only user of arch_mem_domain_destroy was the deprecated
k_mem_domain_destroy function which has now been removed. So remove
arch_mem_domain_destroy as well.
Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
This reverts commit 7d32e9f9a5.
We now allow the kernel to be linked virtually. This patch:
- Properly converts between virtual/physical addresses
- Handles early boot instruction pointer transition
- Double-maps SRAM to both virtual and physical locations
  in boot page tables to facilitate the instruction pointer
  transition, with logic to clean this up after it is completed.
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This reuses the page directory pointer table (PAE=y) or page
directory (PAE=n) to point to next level page directory table
(PAE=y) or page tables (PAE=n) to identity map the physical
memory. This gets rid of the extra memory needed to host
the extra mappings which are only used at boot. Following
patches will have code to actually unmap physical memory
during the boot process, so this avoids wasting some
memory.
Since no extra memory needs to be reserved, this also reverts
commit ee3d345c09
("x86: mmu: reserve more space for page table if linking in virt").
Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
There is no need to use this kconfig, as the phys-to-virt
offset is enough to figure out if the kernel is linked in
virtual address space in gen_mmu.py.
For code, use Z_VM_KERNEL instead.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With the introduction of Z_MEM_*_ADDR for physical<->virtual
address translation, there is no need to have x86 specific
versions.
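For illustration, the generic macros replacing the x86-only helpers (the symbol and caller below are examples only):
```c
/* Illustrative only: translate a mapped symbol's virtual address to
 * its physical address and back with the generic macros. */
extern char z_mapped_start[];

static void translate_example(void)
{
    uintptr_t phys = Z_MEM_PHYS_ADDR((uintptr_t)z_mapped_start);
    uintptr_t virt = Z_MEM_VIRT_ADDR(phys);

    ARG_UNUSED(virt);
}
```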
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When adding a new thread to memory domain, there is a NULL check
to figure out if a thread is being migrated to another memory
domain. However, the NULL check is AFTER the physical-to-virtual
address translation, which means (NULL + offset) is no longer NULL.
This results in calling reset_region() with an invalid page table
pointer. Fix this by doing the NULL check before address
translation.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When linking in virtual address space, we still need physical
addresses in SRAM to be mapped so the platform can boot from physical
memory and access structures necessary for boot (e.g. GDT and
IDT). So we need to enlarge the reserved space for the page table
to accommodate this.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
When feeding &z_shared_kernel_page_start directly to
Z_X86_PHYS_ADDR(), the compiler would complain about an array
subscript out of bounds when linking in virtual address space.
So cast it to uintptr_t first before translation.
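A sketch of the change (the symbol and macro come from the x86 MMU code; the surrounding assignment is illustrative):
```c
/* Before: warns about an out-of-bounds array subscript */
/* phys = Z_X86_PHYS_ADDR(&z_shared_kernel_page_start);  */

/* After: cast to uintptr_t before translating */
uintptr_t phys = Z_X86_PHYS_ADDR((uintptr_t)&z_shared_kernel_page_start);
```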
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
Add a newer, much smaller and simpler implementation of abort and
join. No need to involve the idle thread. No need for a special code
path for self-abort. Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation. All work in both
calls happens under a single locked path with no unexpected
synchronization points.
This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.
Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.
Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
This adds a new kconfig to enable the use of a memory map.
This map can be populated automatically if
CONFIG_MULTIBOOT_MEMMAP=y, or it can be manually defined
via x86_memmap[].
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
This is a hidden option to indicate we are building for
PC-compatible devices (which have BIOS, ACPI, etc., as is
standard on such devices).
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
With x86, there are usually memory regions that are reserved
for firmware and device MMIOs. We don't want to use these
pages for memory mapping so mark them as reserved at boot.
The weakly defined x86_memmap contains the list of memory
regions, which can be overridden by SoC or board configurations.
Also, with CONFIG_MULTIBOOT_MEMMAP=y, the memory regions
are populated from multiboot provided data.
Signed-off-by: Daniel Leung <daniel.leung@intel.com>