Commit graph

3183 commits

Andy Ross
a202670c18 kernel/sched: Remove now-spurious SWAP_NONATOMIC workaround
Recent work to normalize use of the thread QUEUED state bit means that
we never attempt to remove unqueued threads from the low-level run
queue.  So the old workaround for SWAP_NONATOMIC that was trying to
detect this condition isn't necessary anymore.

That is serendipitous, because the workaround was written to encode
some very specific logic about the circumstances in which _current
could be dequeued, logic that I'd like to be able to break.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
05c468f594 kernel/sched: Make z_ready_thread() safe vs. already-running threads
This is part of the scheduler API, and was always just a synchronized
wrapper around the internal ready_thread() function.  But where the
internal users seem to be careful not to call it on threads that are
not known to be already queued or running, the general users in the
IPC code seem to be less strict.

Add a simple test to detect the case where a thread is already
running.  Right now this just loops over the array of CPUs, so is O(N)
in the CPU count even though N is never more than four for us
currently.  But this is possible without modifying data structures.  A
more scalable way to do this if we ever need to run on very parallel
systems would be to use another state bit for RUNNING, or to keep a
backpointer in the thread struct to the CPU it's running on, etc...
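
As a sketch of the check being described (the loop bound and per-CPU
field follow the general shape of the kernel structures, but treat the
exact names as assumptions rather than the committed code):

```
#include <kernel_structs.h>

/* Illustrative only: report whether a thread is currently _current on
 * any CPU.  O(N) in the CPU count, no data structure changes needed.
 */
static inline bool thread_is_running_somewhere(struct k_thread *thread)
{
	for (int i = 0; i < CONFIG_MP_NUM_CPUS; i++) {
		if (_kernel.cpus[i].current == thread) {
			return true;
		}
	}
	return false;
}
```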

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
6b84ab3830 kernel/sched: Adjust locking in z_swap()
Swap was originally written to use the scheduler lock just to select a
new thread, but it would be nice to be able to rely on scheduler
atomicity later in the process (in particular it would be nice if the
assignment to cpu.current could be seen atomically).  Rework the code
a bit so that swap takes the lock itself and holds it until just
before the call to arch_switch().

Note that the local interrupt mask has always been required to be held
across the swap, so extending the lock here has no effect on latency
at all on uniprocessor setups, and even on SMP only affects average
latency and not worst case.
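
A rough sketch of the reworked flow; sched_spinlock, next_up() and the
bookkeeping are simplified here, so read this as an illustration rather
than the committed code:

```
/* Illustrative only: swap takes the scheduler lock itself and holds it
 * until just before the architecture-level switch, so the assignment
 * to cpu.current becomes visible atomically.
 */
static void swap_sketch(void)
{
	k_spinlock_key_t key = k_spin_lock(&sched_spinlock);
	struct k_thread *old_thread = _current;
	struct k_thread *new_thread = next_up();  /* pick under the lock */

	_current_cpu->current = new_thread;       /* seen atomically */
	/* ... remaining bookkeeping elided ... */
	k_spin_unlock(&sched_spinlock, key);      /* just before switching */
	arch_switch(new_thread->switch_handle, &old_thread->switch_handle);
}
```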

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
37866336f9 kernel/sched: Fix race between thread wakeup timeout and abort
Aborted threads will cancel their timeouts, but the timeout subsystem
isn't protected under the same lock so it's possible for a timeout to
fire just as a thread is being aborted and wake it up unexpectedly.
Check the state before blowing anything up.
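
The shape of that check, as an illustration (the handler and state bit
exist in the kernel, but the exact condition used by the fix may
differ):

```
/* Illustrative only: bail out if the thread was aborted while its
 * timeout was firing, instead of waking up a dead thread.
 */
static void thread_timeout_sketch(struct _timeout *timeout)
{
	struct k_thread *thread = CONTAINER_OF(timeout, struct k_thread,
					       base.timeout);

	if ((thread->base.thread_state & _THREAD_DEAD) != 0U) {
		return;  /* aborted concurrently; nothing to wake */
	}
	/* ... normal wakeup path ... */
}
```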

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross
8e16012ab7 kernel/thread: Initialize pended_on field of struct thread_base
This got missed, leaving garbage there for restarted threads to trip
on.  Actually I see multiple uninitialized fields, which seems odd.
This code deserves some rework; thread initialization isn't a
performance path and we should probably be zeroing the struct out.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andrei Emeltchenko
377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving the LOCKED() macro to the
kernel_internal.h header where it belongs.
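
For reference, the macro in question has roughly this shape (a sketch
of the pattern, not necessarily the exact definition): it runs the
following block once while holding the given spinlock.

```
/* Approximate shape of LOCKED(): execute the attached block with the
 * spinlock held, releasing it when the block exits.
 */
#define LOCKED(lck) \
	for (k_spinlock_key_t __i = {}, __key = k_spin_lock(lck); \
	     !__i.key; \
	     k_spin_unlock(lck, __key), __i.key = 1)
```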

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
Daniel Leung
ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig option, CONFIG_SRAM_OFFSET, to specify the
offset from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform-related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung
ec21c0b92f kernel: mmu: fix boot address translation macros
The Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() address
translation macros are flipped in their calculations.
The calculation is supposed to be:

  virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
                 SRAM_BASE_ADDRESS)

So fix them.
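
Following that formula, the corrected direction of the two macros looks
roughly like the sketch below; the helper name is invented for
illustration and the real definitions may differ in casts and
parenthesization:

```
/* Offset between the kernel's boot virtual mapping and physical RAM
 * (illustrative helper, not the committed macro name).
 */
#define BOOT_VIRT_PHYS_OFFSET \
	((CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET) - \
	 CONFIG_SRAM_BASE_ADDRESS)

#define Z_BOOT_PHYS_TO_VIRT(phys) ((phys) + BOOT_VIRT_PHYS_OFFSET)
#define Z_BOOT_VIRT_TO_PHYS(virt) ((virt) - BOOT_VIRT_PHYS_OFFSET)
```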

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Anas Nashif
ecdc770c9b kernel: thread: do not assert on tests
Remove an assert that prevented us from testing non-userspace code on
platforms that can do userspace.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-22 14:36:06 -05:00
Andy Ross
252764a4ba kernel/timeout: Fix off-by-one in absolute timeouts
The computation was using the already-adjusted input value that
assumed relative timeouts and not the actual argument the user passed.
Absolute timeouts were consistently waking up one tick early.

Fixes #32499

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-22 08:45:24 -05:00
Peter Bigot
d554d34137 device: add post-process of elf file to manage device handles
Following the idiom used for system calls, add script support to read
the initial application binary to identify which devices are defined,
and to use their offset in the device array as their unique handle
rather than the externally-defined ordinal from devicetree.  The
device dependency arrays are updated to use these handles.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 15:46:16 -05:00
Peter Bigot
d1a0568e11 device: store device pm busy status in the state structure
Move the busy status from a global atomic bit sequence to atomic flags
in the device PM state.  While this temporarily adds 4 bytes to each
PM structure, the whole device PM infrastructure will be refactored and
it's likely the extra memory can be recovered.
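
Illustrative only: busy becomes an atomic flag bit on the per-device PM
state instead of a bit in one global bitmap. The field and bit names
below are assumptions for the sketch, not the committed structure
layout.

```
/* Sketch: set and test "busy" in the device's own PM state. */
static inline void pm_busy_set_sketch(const struct device *dev)
{
	atomic_set_bit(&dev->pm->flags, PM_DEVICE_FLAG_BUSY);  /* names assumed */
}

static inline bool pm_busy_check_sketch(const struct device *dev)
{
	return atomic_test_bit(&dev->pm->flags, PM_DEVICE_FLAG_BUSY);
}
```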

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot
65eee5cb47 device: store initialization status in the state structure
Separate the state indicator of whether the initialization function
has been invoked from the success or failure of the initialization.
This allows precise confirmation that the device is ready (i.e. it has
been initialized, and that initialization succeeded).
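
The resulting "is the device ready" test then reads roughly as below;
the two fields are the ones this change separates, though the exact
names are an assumption here.

```
/* Sketch: ready means the init function ran AND it reported success. */
static inline bool device_ready_sketch(const struct device *dev)
{
	return dev->state->initialized && (dev->state->init_res == 0U);
}
```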

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot
8d771f1d8e device: move device power management state into common dynamic state
This avoids the need for a distinct object that uses flash to store its
initializer.  Instead the state is initialized when the kernel is
starting up, before anything can reference it.  In future refactoring
the PM state could be accessed directly without storing an extra
pointer in the static device state.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot
1cadd8b305 device: perform dynamic device initialization during system startup
Initialize all device objects in a batch before invoking any code that
might try to reference data in them.  This eliminates a race condition
enabled by the ability to resolve a device structure at build time,
and reference it from one device's initialization routine before the
device itself has been initialized.

While the devices are pulled from the sys_init records rather than
from the static devices, all in-tree init_entry records that are associated
with devices are produced via Z_DEVICE_DEFINE(), so there should be no
static devices that would be missed by instead iterating over the
device records.
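
A rough sketch of the batch pass: walk the device section emitted by
the linker and set up each device's dynamic state before any init
function can reference another device. Symbol and field names are
illustrative.

```
/* Illustrative only: initialize every device's dynamic state in one
 * pass during early startup, before any device init function runs.
 */
extern const struct device __device_start[];
extern const struct device __device_end[];

static void device_state_init_sketch(void)
{
	for (const struct device *dev = __device_start;
	     dev < __device_end; dev++) {
		dev->state->initialized = false;
		dev->state->init_res = 0U;
	}
}
```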

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Peter Bigot
5b36a01a67 device: binding lookup should return null for unsupported names
A null device name should map to a null device.  So should a name that
is empty.
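
In code terms the rule is simply a guard at the top of the lookup,
along these lines:

```
/* Sketch: a NULL or empty name cannot match any device, so the lookup
 * should return NULL for both before searching by name.
 */
static bool binding_name_is_usable(const char *name)
{
	return (name != NULL) && (name[0] != '\0');
}
```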

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-16 14:39:53 -06:00
Andy Ross
c7d0cb6641 include/kernel_arch_interface.h: Redocument arch_switch()
Some recent changes exposed some common "arch_switch() anti-patterns"
in various architectures.  The documentation technically described
this all correctly, but probably wasn't as clear as it should have
been.  Rewrite, making clear exactly what needs to happen and how the
fields should be interpreted.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
4ff457113e kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.

Example:
   * CPU0 is idle, CPU1 is running thread A
   * CPU1 makes high priority thread B runnable
   * CPU1 reaches a schedule point (or returns from an interrupt) and
     decides to run thread B instead
   * CPU0 simultaneously takes its IPI and returns, selecting thread A

Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable.  So we have a deadlock: each CPU is spinning, waiting for
the other!

Actually, in practice this seems not to happen on existing hardware
platforms; it's only exercisable in emulation.  The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears.  I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly.  In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.

The solution is simple enough: don't store the _current thread in the
run queue until we are on the tail end of the context switch path,
after wait_for_switch(), when we are guaranteed to reach the end in
bounded time.

Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
91946ef21c kernel/sched: Refactor, unify management of QUEUED state
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync.  Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity.  The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments.  No behavior
change.
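
The unified helpers have roughly this shape (a sketch; the real
functions in kernel/sched.c also carry assertions and SMP details):

```
/* Sketch: run queue membership and the QUEUED bit always move together. */
static ALWAYS_INLINE void queue_thread(void *pq, struct k_thread *thread)
{
	thread->base.thread_state |= _THREAD_QUEUED;
	_priq_run_add(pq, thread);
}

static ALWAYS_INLINE void dequeue_thread(void *pq, struct k_thread *thread)
{
	thread->base.thread_state &= ~_THREAD_QUEUED;
	_priq_run_remove(pq, thread);
}
```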

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
dd43221540 kernel/sched: Fix race with switch handle
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL.  So it will get the signal to move on when we revert the
value, when clearly we are still running on the stack!

In practice this was found on x86 which poisons the switch context
such that it crashes instantly.

Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.

Fixes #28105

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross
1ba7414029 kernel/sched: Correct coherence assert
Some legacy spots in our IPC layer (legally) pass a NULL wait queue to
pend().  Allow this in the coherence assertion.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross
4dc6a0b89b kernel/poll: Remove dummy waitq from stack
The poll code uses a dummy wait queue so the threads have something to
block on, but the previous coherence pass (which rearranged things to
put the _poller data elsewhere) missed that this was on the stack,
which is not allowed.  It actually has no use except as a list, so
make it a global static instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross
1d51e888d8 kernel/z_swap: Remove on-stack dummy spinlock
The z_swap_unlocked() function used a dummy spinlock for simplicity.
But this runs afoul of the check for stack-resident spinlocks
(forbidden when KERNEL_COHERENCE is set).  And it's executing needless
code to release the lock anyway.  Replace it with a compile-time NULL,
which will improve performance, correctness and code size.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross
604f0f44b6 kernel/sched: Add missing lock around waitq unpend calls
The two calls to unpend a thread from a wait queue were inexplicably*
unsynchronized, as James Harris discovered.  Rework them to call the
lowest-level primitives so we can wrap the process inside the scheduler
lock.

Fixes #32136

* I took a brief look.  What seems to have happened here is that these
  were originally synchronized via an implicit lock from an outer caller
  (remember the original uniprocessor irq_lock() API is a recursive
  lock), and they were mostly implemented in terms of middle-level
  calls that were themselves locked.  So those got ported over to the
  newer spinlock but the outer wrapper layer got forgotten.
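
The reworked shape is roughly this, reusing the LOCKED() helper
mentioned earlier in this log; treat the exact callee names as
assumptions:

```
/* Sketch: the whole unpend + timeout-abort sequence now runs inside
 * the scheduler lock instead of relying on an outer caller's lock.
 */
void z_unpend_thread_sketch(struct k_thread *thread)
{
	LOCKED(&sched_spinlock) {
		unpend_thread_no_timeout(thread);
		(void)z_abort_thread_timeout(thread);
	}
}
```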

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-10 07:43:18 -05:00
Daniel Leung
371752bce3 kernel: tls: align tdata/tbss sections in stack
This lets the linker tell us what kind of alignment is required
for both tdata and tbss data when copying them into the stack.
If they are not aligned as expected by the toolchain, generated
code would access incorrect locations for thread variables.

Fixes #32015

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-07 23:28:43 -05:00
Nicolas Pitre
f9461d1ac4 mmu: fix ARM64 compilation by removing z_mapped_size usage
The linker script defines `z_mapped_size` as follows:

```
	z_mapped_size = z_mapped_end - z_mapped_start;
```

This is done with the belief that precomputed values at link time will
make the code smaller and faster.

On AArch64, symbol values are relocated and loaded relative to the PC
as those are normally meant to be memory addresses.

Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then
`z_mapped_size` might still have a reasonable value, say 0x59334.
But, when interpreted as an address, that's very very far from the PC
whose value is in the neighborhood of 0x2000000000. That overflows the
4GB relocation range:

```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c):
relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```

The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of
`z_mapped_end - z_mapped_start` at the source code level. Given this
is used within loops that already start with `z_mapped_start` anyway,
the compiler is smart enough to combine the two occurrences and
dispense with a size counter, making the code effectively
slightly better for all while avoiding the AArch64 relocation
overflow:

```
   text    data     bss     dec     hex filename
   1216       8  294936  296160   484e0 mmu.c.obj.arm64.before
   1212       8  294936  296156   484dc mmu.c.obj.arm64.after
   1110       8    9244   10362    287a mmu.c.obj.x86-64.before
   1106       8    9244   10358    2876 mmu.c.obj.x86-64.after
```
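
The source-level definition then amounts to something like the sketch
below, deriving the size from the two boundary symbols the linker
already provides:

```
/* Sketch: compute the mapped-kernel size from the boundary symbols
 * instead of relying on a link-time z_mapped_size value.
 */
extern char z_mapped_start[];
extern char z_mapped_end[];

#define Z_KERNEL_VIRT_SIZE ((size_t)(z_mapped_end - z_mapped_start))
```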

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-05 17:19:56 -05:00
Carlo Caione
302a36a115 kernel: mmu: Fix trivial typos
Otherwise the memory scheme is confusing to read.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-02-04 14:00:36 -05:00
Martin Åberg
612dad264c kernel: Decouple TICKS_PER_SEC from TICKLESS_CAPABLE
The SYS_CLOCK_TICKS_PER_SEC default may depend on the kernel config
for tickless, rather than the capability.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-02-04 12:34:23 -05:00
Ioannis Glaropoulos
40aab3276c Revert "kernel: init: activate FPU for main thread"
Activating the K_FP_REGS flag introduces stack memory
overhead for the main thread on the Cortex-M architecture.
Several ARM platforms experience main thread stack
overflows when building with FPU_SHARING=y.
Enabling FPU sharing in the main thread should not be
the default configuration. Users are welcome to
enable FP sharing on the main thread in the
application code, in main().

This reverts commit 8453a73ede.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-03 17:22:50 -05:00
Anas Nashif
39f632e7f0 kernel: fix usage of KERNEL_COHERENCE macro
Add missing CONFIG_ to KERNEL_COHERENCE usage in code.

Fixes #30380
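
The fix amounts to guarding on the generated Kconfig symbol rather than
the bare name, e.g.:

```
/* Before: this guard never matched, silently disabling the code. */
#ifdef KERNEL_COHERENCE
/* ... coherence-specific code ... */
#endif

/* After: use the CONFIG_-prefixed symbol that Kconfig actually defines. */
#ifdef CONFIG_KERNEL_COHERENCE
/* ... coherence-specific code ... */
#endif
```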

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-02-03 10:42:04 -05:00
Daniel Leung
d1495e98e2 kernel: fix arch_mem_coherent() call in spinlock
The call to arch_mem_coherent() inside spinlock.h,
when spinlock validation and memory coherence are enabled,
causes a build error because spinlock.h does not include
kernel_arch_func.h directly. However, simply including
that file does not work either, as this creates a
chicken-and-egg problem in the chain of include files.
In order to make spin validation work with kernel
coherence enabled, a separate function is created
to break the circular dependencies between include files.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-03 10:42:04 -05:00
Daniel Leung
079bc64c16 kernel: fix _kernel argument to arch_mem_coherent
The argument to arch_mem_coherent() is a pointer, so pass
a pointer to _kernel.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-03 10:42:04 -05:00
Andy Ross
887e1abace kernel/timeout: Fix timeout "sooner" computation
There was an edge case in the timeout handling (exposed by, but not
strictly related to, the recent timeslice fix): the next_timeout()
computation would include time slice expiration as a clamp on the
result, but this would be invoked also on the z_set_timeout_expiry()
path, which gets hooked on entry to a new thread and is needed to set
the timeout in the first place.  So if no other timer interrupt was
scheduled, it was possible to miss the first timeslice interrupt after
thread scheduling.

The explanation is much longer than the fix (use <= as the comparator
instead of <).

In practice this was only being hit in the existing test suite on
riscv miv running under renode using non-default clock rates.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-02 17:58:40 -05:00
Andy Ross
544475d8a7 kernel/timeout: Schedule zero-time timeouts
Fix an edge case that snuck in with the recent fix: if timeslicing is
enabled, the CPU's slice_ticks will be zero, and thus match a timeout
object's dticks value of zero, and thus get suppressed (because "we
already have a timeout scheduled for that") incorrectly.

Fixes #31789

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-02 17:58:40 -05:00
Alexandre Bourdiol
8925af94f2 kernel: Kconfig: increase test default MAIN_STACK_SIZE for ARM Cortex M
There are more and more tests that fail due to stack overflow.
Increase MAIN_STACK_SIZE to fix those issues.

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2021-02-02 10:05:46 -05:00
Flavio Ceolin
148769c715 sched: timeout: Do not miss slice timeouts
Time slices don't have an associated timeout struct stored in the
timeout_list. The time slice timeout is programmed directly into the
system clock and tracked in _current_cpu->slice_ticks.

There is one issue where the time slice timeout can be missed because
the system clock is re-programmed to a longer timeout. For this to
happen, it is only necessary that the timeout_list is empty (no
timeouts set) and a new timeout longer than the remaining time slice
is set. This is caused by z_add_timeout not checking the slice ticks.

The following example demonstrates the issue:

#include <zephyr.h>

/* Example values for this snippet; the exact numbers are assumptions
 * and not important to the race being demonstrated.
 */
#define STACK_SIZE  1024
#define NUM_THREAD  3
#define SLICE_SIZE  200 /* ms */

K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);
K_THREAD_STACK_ARRAY_DEFINE(tstacks, NUM_THREAD, STACK_SIZE);
K_SEM_DEFINE(sema, 0, NUM_THREAD);

static inline void spin_for_ms(int ms)
{
	uint32_t t32 = k_uptime_get_32();

	while (k_uptime_get_32() - t32 < ms) {
	}
}

static void thread_time_slice(void *p1, void *p2, void *p3)
{
	printk("thread[%d] - Before spin\n", (int)(uintptr_t)p1);

	/* Spinning for longer than slice */
	spin_for_ms(SLICE_SIZE + 20);

	/* The following print should not happen before another
	 * same priority thread starts.
	 */
	printk("thread[%d] - After spinning\n", (int)(uintptr_t)p1);
	k_sem_give(&sema);
}

void main(void)
{
	k_tid_t tid[NUM_THREAD];
	struct k_thread t[NUM_THREAD];
	uint32_t slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE);
	int old_prio = k_thread_priority_get(k_current_get());

	/* disable timeslice */
	k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));

	for (int j = 0; j < 2; j++) {
		k_sem_reset(&sema);

		/* update priority for current thread */
		k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(j));

		/* synchronize to tick boundary */
		k_usleep(1);

		/* create delayed threads with equal preemptive priority */
		for (int i = 0; i < NUM_THREAD; i++) {
			tid[i] = k_thread_create(&t[i], tstacks[i], STACK_SIZE,
						 thread_time_slice, (void *)i, NULL,
						 NULL, K_PRIO_PREEMPT(j), 0,
						 K_NO_WAIT);
		}

		/* enable time slice (and reset the counter!) */
		k_sched_time_slice_set(SLICE_SIZE, K_PRIO_PREEMPT(0));

		/* Spin for a while to spend this thread's time, but not
		 * longer than a slice. This is important.
		 */
		spin_for_ms(100);

		printk("before sleep\n");
		/* relinquish CPU and wait for each thread to complete */
		k_sleep(K_TICKS(slice_ticks * (NUM_THREAD + 1)));

		for (int i = 0; i < NUM_THREAD; i++) {
			k_sem_take(&sema, K_FOREVER);
		}

		/* test case teardown */
		for (int i = 0; i < NUM_THREAD; i++) {
			k_thread_abort(tid[i]);
		}
		/* disable time slice */
		k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));
	}
	k_thread_priority_set(k_current_get(), old_prio);
}

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-27 16:55:58 -05:00
Andrew Boie
14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
6c97ab3167 mmu: promote public APIs
These are application facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
c7be5dddda mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
60d306642e kernel: add z_num_pagefaults_get()
Simple counter of number of successfully handled page faults by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
611b626b39 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
a5cb878144 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
431b7c0fe5 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
a6eca9fab6 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See
doxygen for these API definitions for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
ecb25fec51 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
5db615bb38 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
8ccec8eba6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().
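
A hypothetical usage sketch; the flag name and header follow the
k_mem_* API added in this series, but treat the exact signature as an
assumption:

```
#include <sys/mem_manage.h>

void anonymous_mapping_example(void)
{
	/* Ask the kernel for 16 KiB of read-write anonymous memory. */
	uint8_t *buf = k_mem_map(16 * 1024, K_MEM_PERM_RW);

	if (buf == NULL) {
		/* Not enough free page frames; see k_mem_free_get(). */
		return;
	}
	buf[0] = 0xaa;
}
```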

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00