Commit graph

2503 commits

Daniel Leung
d1495e98e2 kernel: fix arch_mem_coherent() call in spinlock
The call to arch_mem_coherent() inside spinlock.h,
when spinlock validation and memory coherence are enabled,
causes a build error because spinlock.h does not include
kernel_arch_func.h directly. However, simply including
that file does not work either, as this creates
a chicken-or-egg problem in the chain of include files.
In order to make spinlock validation work with kernel
coherence enabled, a separate function is created
to break the circular dependency between include files.
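
A minimal sketch of the approach; the helper name and file layout are
hypothetical, not the actual kernel sources:

/* spinlock.h: only a declaration, no arch header needed here */
bool z_spin_lock_mem_coherent(struct k_spinlock *l);

/* a kernel .c file (illustrative location): the definition can
 * include kernel_arch_func.h without creating an include cycle.
 */
#include <kernel_arch_func.h>

bool z_spin_lock_mem_coherent(struct k_spinlock *l)
{
	return arch_mem_coherent((void *)l);
}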

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-03 10:42:04 -05:00
Daniel Leung
079bc64c16 kernel: fix _kernel argument to arch_mem_coherent
The argument to arch_mem_coherent() is a pointer, so pass
a pointer to _kernel.
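
A minimal before/after sketch of the fix (context is illustrative):

/* before: the struct itself was passed where a pointer is expected */
arch_mem_coherent(_kernel);

/* after */
arch_mem_coherent(&_kernel);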

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-03 10:42:04 -05:00
Andy Ross
887e1abace kernel/timeout: Fix timeout "sooner" computation
There was an edge case in the timeout handling (exposed by, but not
strictly related to, the recent timeslice fix): the next_timeout()
computation would include time slice expiration as a clamp on the
result, but this would also be invoked on the z_set_timeout_expiry()
path, which is hooked on entry to a new thread and is needed to set
the timeout in the first place.  So if no other timer interrupt was
scheduled, it was possible to miss the first timeslice interrupt after
thread scheduling.

The explanation is much longer than the fix (use <= as the comparator
instead of <).

In practice this was only being hit in the existing test suite on
riscv miv running under renode using non-default clock rates.
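
A sketch of the comparator change, with hypothetical names standing
in for the actual timeout code:

#include <stdint.h>

static int32_t next_scheduled;			/* hypothetical timer state */
static void set_timer(int32_t ticks) { (void)ticks; }

static void set_timeout_expiry_sketch(int32_t ticks)
{
	/* Must fire even when the new expiry *equals* the scheduled
	 * one, or the first timeslice interrupt can be missed.
	 */
	if (ticks <= next_scheduled) {		/* was: ticks < next_scheduled */
		set_timer(ticks);
		next_scheduled = ticks;
	}
}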

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-02 17:58:40 -05:00
Andy Ross
544475d8a7 kernel/timeout: Schedule zero-time timeouts
Fix an edge case that snuck in with the recent fix: if timeslicing is
enabled, the CPU's slice_ticks will be zero, and thus match a timeout
object's dticks value of zero, and thus get suppressed (because "we
already have a timeout scheduled for that") incorrectly.

Fixes #31789
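
A conceptual sketch of the faulty suppression and its fix, again with
hypothetical stand-ins rather than the actual kernel code:

#include <stdint.h>

static int32_t slice_ticks;			/* hypothetical scheduler state */
static void set_timer(int32_t ticks) { (void)ticks; }

static void add_timeout_sketch(int32_t dticks)
{
	/* Old, faulty guard: dticks == 0 compared equal to
	 * slice_ticks == 0 and the timeout was suppressed as
	 * "already scheduled":
	 *
	 *     if (dticks != slice_ticks) { set_timer(dticks); }
	 *
	 * Fixed: zero-time timeouts are always programmed.
	 */
	if (dticks == 0 || dticks != slice_ticks) {
		set_timer(dticks);
	}
}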

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-02 17:58:40 -05:00
Alexandre Bourdiol
8925af94f2 kernel: Kconfig: increase test default MAIN_STACK_SIZE for ARM Cortex M
There are more and more tests failing due to stack overflow.
Increase MAIN_STACK_SIZE to fix those issues.

Signed-off-by: Alexandre Bourdiol <alexandre.bourdiol@st.com>
2021-02-02 10:05:46 -05:00
Flavio Ceolin
148769c715 sched: timeout: Do not miss slice timeouts
Time slices don't have a timeout struct associated with them and
stored in timeout_list. The time slice timeout is programmed directly
into the system clock and tracked in _current_cpu->slice_ticks.

There is an issue where the time slice timeout can be missed because
the system clock is re-programmed to a longer timeout. For this to
happen, it is only necessary that the timeout_list be empty (no other
timeout set) and that a new timeout longer than the remaining time
slice be set. This happens because z_add_timeout() does not check the
slice ticks.

The following example reproduces the issue:

/* Example values added so the snippet builds; the original message
 * does not define STACK_SIZE, NUM_THREAD or SLICE_SIZE.
 */
#include <zephyr.h>

#define STACK_SIZE 1024
#define NUM_THREAD 3
#define SLICE_SIZE 200 /* ms */

K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);
K_THREAD_STACK_ARRAY_DEFINE(tstacks, NUM_THREAD, STACK_SIZE);
K_SEM_DEFINE(sema, 0, NUM_THREAD);

static inline void spin_for_ms(int ms)
{
	uint32_t t32 = k_uptime_get_32();

	while (k_uptime_get_32() - t32 < ms) {
	}
}

static void thread_time_slice(void *p1, void *p2, void *p3)
{
	printk("thread[%d] - Before spin\n", (int)(uintptr_t)p1);

	/* Spinning for longer than slice */
	spin_for_ms(SLICE_SIZE + 20);

	/* The following print should not happen before another
	 * thread of the same priority has started.
	 */
	printk("thread[%d] - After spinning\n", (int)(uintptr_t)p1);
	k_sem_give(&sema);
}

void main(void)
{
	k_tid_t tid[NUM_THREAD];
	struct k_thread t[NUM_THREAD];
	uint32_t slice_ticks = k_ms_to_ticks_ceil32(SLICE_SIZE);
	int old_prio = k_thread_priority_get(k_current_get());

	/* disable timeslice */
	k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));

	for (int j = 0; j < 2; j++) {
		k_sem_reset(&sema);

		/* update priority for current thread */
		k_thread_priority_set(k_current_get(), K_PRIO_PREEMPT(j));

		/* synchronize to tick boundary */
		k_usleep(1);

		/* create delayed threads with equal preemptive priority */
		for (int i = 0; i < NUM_THREAD; i++) {
			tid[i] = k_thread_create(&t[i], tstacks[i], STACK_SIZE,
						 thread_time_slice, (void *)(uintptr_t)i, NULL,
						 NULL, K_PRIO_PREEMPT(j), 0,
						 K_NO_WAIT);
		}

		/* enable time slice (and reset the counter!) */
		k_sched_time_slice_set(SLICE_SIZE, K_PRIO_PREEMPT(0));

		/* Spin for a while to consume this thread's time, but
		 * not longer than a slice. This is important.
		 */
		spin_for_ms(100);

		printk("before sleep\n");
		/* relinquish CPU and wait for each thread to complete */
		k_sleep(K_TICKS(slice_ticks * (NUM_THREAD + 1)));

		for (int i = 0; i < NUM_THREAD; i++) {
			k_sem_take(&sema, K_FOREVER);
		}

		/* test case teardown */
		for (int i = 0; i < NUM_THREAD; i++) {
			k_thread_abort(tid[i]);
		}
		/* disable time slice */
		k_sched_time_slice_set(0, K_PRIO_PREEMPT(0));
	}
	k_thread_priority_set(k_current_get(), old_prio);
}

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-27 16:55:58 -05:00
Andrew Boie
14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
6c97ab3167 mmu: promote public APIs
These are application-facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
c7be5dddda mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.
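
A hedged sketch of the reservation policy; the names and the slot
pool are illustrative, not the actual backing store API:

#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

static uintptr_t slots[16];		/* hypothetical slot pool */
static unsigned int free_slots = 16U;

static int location_get(bool page_fault, uintptr_t *loc)
{
	/* The last free slot is reserved for genuine page faults,
	 * so preemptive evictions can never exhaust the store.
	 */
	if (free_slots == 0U || (free_slots == 1U && !page_fault)) {
		return -ENOMEM;
	}
	free_slots--;
	*loc = slots[free_slots];
	return 0;
}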

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
60d306642e kernel: add z_num_pagefaults_get()
Simple counter of the number of page faults successfully handled by
the core kernel.
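
A minimal sketch of such a counter, assuming an IRQ lock for a
consistent read (the real implementation may differ):

#include <kernel.h>

static unsigned long z_num_pagefaults;

/* Called after a page fault has been successfully resolved. */
static inline void pagefault_count(void)
{
	z_num_pagefaults++;
}

unsigned long z_num_pagefaults_get(void)
{
	unsigned int key = irq_lock();
	unsigned long ret = z_num_pagefaults;

	irq_unlock(key);
	return ret;
}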

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
611b626b39 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
a5cb878144 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.
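
A hedged usage sketch, assuming the API shapes implied by the
surrounding commits (z_mem_page_out(), z_mem_page_in(), z_mem_pin());
exact signatures and headers may differ:

#include <kernel.h>
#include <sys/mem_manage.h>

static uint8_t __aligned(CONFIG_MMU_PAGE_SIZE) big_buf[CONFIG_MMU_PAGE_SIZE];

void demand_paging_example(void)
{
	/* Evict big_buf's frame to the backing store, if room exists. */
	if (z_mem_page_out(big_buf, sizeof(big_buf)) != 0) {
		return;	/* backing store full */
	}

	/* Fault it back in, then pin it so it stays resident. */
	z_mem_page_in(big_buf, sizeof(big_buf));
	z_mem_pin(big_buf, sizeof(big_buf));
}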

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
431b7c0fe5 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
a6eca9fab6 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See the doxygen
documentation of these APIs for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
ecb25fec51 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Map it properly at
runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.
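
With pre-allocation in place, the hook can no longer fail; a sketch of
the assumed resulting contract:

#include <stddef.h>
#include <stdint.h>

/* Returns void: paging structures for the whole virtual region are
 * pre-allocated, so nothing is allocated at map time.
 */
void arch_mem_map(void *virt, uintptr_t phys, size_t size, uint32_t flags);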

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
5db615bb38 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.
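
A minimal usage sketch (header assumed):

#include <sys/mem_manage.h>

void budget_check(void)
{
	size_t avail = k_mem_free_get();

	if (avail < CONFIG_MMU_PAGE_SIZE) {
		/* not enough anonymous memory for another page mapping */
	}
}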

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
8ccec8eba6 kernel: add k_mem_map() interface
Allows applications to increase the data space available to Zephyr
via anonymous memory mappings. Loosely based on mmap().
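
A hedged usage sketch; the flag name is assumed from the mem_manage
API:

#include <sys/mem_manage.h>

void map_example(void)
{
	/* Map one page of anonymous memory, read/write. */
	void *page = k_mem_map(CONFIG_MMU_PAGE_SIZE, K_MEM_PERM_RW);

	if (page == NULL) {
		/* out of virtual address space or free page frames */
	}
}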

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Peter Bigot
affa7a1c7e Revert "device: add post-process of elf file to manage device handles"
This reverts commit 40d3653758.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-23 18:01:03 -05:00
Nicolas Pitre
a2011d8af9 z_heap_aligned_alloc(): avoid memory wastage
The strategy used in z_heap_aligned_alloc() was to allocate an extra
align-sized memory block for storing a pointer to the memory heap.
This is wasteful in terms of memory usage when alignment is larger
than a pointer width. A loop is needed to find the initial memory
start when freeing it, which isn't optimal either.

Instead, let's have sys_heap_aligned_alloc() rewind a pointer after
it is aligned to make just enough room for storing our heap reference.
This way the heap reference is always located immediately before the
aligned memory and any unused memory is returned to the heap.

The rewind and alignment values may coincide in which case only
the alignment is necessary anyway.
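
A self-contained sketch of the rewind idea, using malloc() as a
stand-in heap (the real sys_heap version additionally returns the
unused head of the block to the heap, which malloc() cannot do):

#include <stdint.h>
#include <stdlib.h>

/* align must be a nonzero power of two */
void *aligned_alloc_with_ref(size_t align, size_t size)
{
	/* room for the reference slot plus worst-case alignment */
	void *base = malloc(size + align + sizeof(void *));

	if (base == NULL) {
		return NULL;
	}

	/* Align past the slot, leaving the slot immediately before. */
	uintptr_t p = ((uintptr_t)base + sizeof(void *) + align - 1)
		      & ~(uintptr_t)(align - 1);

	((void **)p)[-1] = base;	/* stash the reference */
	return (void *)p;
}

void aligned_free_with_ref(void *mem)
{
	free(((void **)mem)[-1]);	/* O(1) recovery, no loop */
}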

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-01-22 10:04:43 -05:00
Flavio Ceolin
d21cfd5f36 power: Remove power management conditionals from code
Remove conditionals (PM_DEEP_SLEEP_STATES and PM_SLEEP_STATES) from
power management code. Now these features are always available when
power management is enabled.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-22 09:31:20 -05:00
Flavio Ceolin
579f7049c7 power: Move pm subsystem to new power states
Migrate the whole pm subsystem to use the new power state information
from power_state.h, and get states and residency properties from the
device tree.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-22 09:31:20 -05:00
Peter Bigot
0ab314f705 kernel: const-qualify objects used to calculate delay values
The internal API to measure time until a delay expires does not modify
the referenced timeout.  Make the functions that call it take pointers
to const objects, so that they can be used with pointer to
const-qualified containers.
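
A sketch of the resulting signature, assuming z_timeout_remaining()
is among the functions touched:

/* Measures ticks until expiry without modifying the timeout. */
k_ticks_t z_timeout_remaining(const struct _timeout *timeout);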

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-22 08:05:26 -06:00
Anas Nashif
db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
8e84eaf73e Revert "kernel: add page frame management"
This reverts commit 2ca5fb7e06.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
0417b97257 Revert "kernel: add k_mem_map() interface"
This reverts commit 69d39af5e6.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
6b82664a5a Revert "mmu: add k_mem_free_get()"
This reverts commit 9111ec2c19.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
d887e078f9 Revert "mmu: ensure gperf data is mapped"
This reverts commit e9bfd64110.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
65122b776a Revert "kernel: add demand paging arch interfaces"
This reverts commit b8ae437967.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
cd0beca292 Revert "kernel: add demand paging internal interfaces"
This reverts commit 3e51a7a775.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
752934b3c8 Revert "kernel: add demand paging implementation"
This reverts commit 2fe1fc53c8.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
75ebe4c7bd Revert "mmu: pin the whole kernel"
This reverts commit a45486e1d5.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
c2c87c99c7 Revert "kernel: add z_num_pagefaults_get()"
This reverts commit d7e6bc3e84.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
5e978d237c Revert "mmu: backing stores reserve page fault room"
This reverts commit 7a642f81ab.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
ef17f889dc Revert "mmu: promote public APIs"
This reverts commit 63fc93e21f.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Daniel Leung
d3218ca515 debug: coredump: remove z_ prefix for stuff used outside subsys
This removes the z_ prefix from those items (functions, enums, etc.)
that are used outside the coredump subsys. This aligns better
with the naming convention.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-21 22:08:59 -05:00
Flavio Ceolin
d6d4a832a4 kernel: build: Make TICKLESS_KERNEL depends on TICKLESS_CAPABLE
The TICKLESS_KERNEL option now depends on TICKLESS_CAPABLE being
selected by the target.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-21 17:20:32 -05:00
Andrew Boie
63fc93e21f mmu: promote public APIs
These are application-facing and are prefixed with k_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
7a642f81ab mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
d7e6bc3e84 kernel: add z_num_pagefaults_get()
Simple counter of the number of page faults successfully handled by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
a45486e1d5 mmu: pin the whole kernel
This will enable testing of the implementation until the
critical set of pages is identified and known to the
kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
2fe1fc53c8 kernel: add demand paging implementation
Implement runtime APIs for pinning, paging in, and evicting
memory, as well as the page fault hook called from architecture
code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
3e51a7a775 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
b8ae437967 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See the doxygen
documentation of these APIs for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
e9bfd64110 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Map it properly at
runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
db56722729 mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
9111ec2c19 mmu: add k_mem_free_get()
Return the amount of physical anonymous memory remaining.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00