Commit graph

463 commits

Daniel Leung dbc0be487f kernel: use proper macro to declare extern interrupt stacks
The z_interrupt_stacks array was declared extern in the kernel
internal header file using the same macro that defines the stack
array, just with "extern" added in front. That macro attaches
alignment and section attributes which do not match those of the
actual stack array defined in kernel/init.c: the section name in
the attribute embeds the name of the file where the array is
defined or declared. The same symbol, z_interrupt_stacks, therefore
carried different attributes in two places, and GCC 11 started to
complain about this. Use the newly introduced macro to declare
the stack array extern without adding or replacing any symbol
attributes (see the sketch after this entry).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-07-22 07:24:11 -05:00
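
A minimal sketch of the macro mismatch this commit describes, using simplified hypothetical names (STACK_ARRAY_DEFINE/STACK_ARRAY_DECLARE stand in for the real Zephyr macros, and the section naming is abbreviated):

```c
/* Definition macro: attaches alignment plus a section attribute whose
 * name is derived from where the array lives (in Zephyr it embeds the
 * defining file's name). Used once, in kernel/init.c.
 */
#define STACK_ARRAY_DEFINE(sym, nmemb, size) \
	__attribute__((section(".noinit." #sym), aligned(16))) \
	char sym[nmemb][size]

/* Wrong: prefixing the defining macro with "extern" in a header gives
 * the same symbol different section/alignment attributes, which GCC 11
 * diagnoses.
 */

/* Right: a declaration-only macro that adds no attributes at all. */
#define STACK_ARRAY_DECLARE(sym, nmemb, size) \
	extern char sym[nmemb][size]
```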
Daniel Leung dfa4b7e375 kernel: mmu: z_backing_store* to k_mem_paging_backing_store*
These functions are the ones that need to be implemented by a
backing store outside the kernel. Promote them from the z_*
namespace so they can be included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Daniel Leung 31c362d966 kernel: mmu: rename z_eviction* to k_mem_paging_eviction*
These functions and data structures are the ones that need
to be implemented by an eviction algorithm or application
outside the kernel. Promote them from the z_* namespace so
they can be included in the documentation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-28 11:33:22 -04:00
Andy Ross 851d14afc8 kernel/sched: Remove "cooperative scheduling only" special cases
The scheduler has historically had an API where an application can
inform the kernel that it will never create a thread that can be
preempted, and the kernel and architecture layer would use that as an
optimization hint to eliminate some code paths.

Those optimizations have dwindled to almost nothing at this point, and
they're now objectively a smaller impact than the special casing that
was required to handle the idle thread (which, obviously, must always
be preemptible).

Fix this by eliminating the idea of "cooperative only" and ensuring
that there will always be at least one preemptible priority with value
>=0.  CONFIG_NUM_PREEMPT_PRIORITIES now specifies the number of
user-accessible priorities other than the idle thread.

The only remaining workaround is that some older architectures (and
also SPARC) use the CONFIG_PREEMPT_ENABLED=n state as a hint to skip
thread switching on interrupt exit.  So detect exactly those platforms
and implement a minimal workaround in the idle loop (basically "just
call swap()") instead, with a big explanation.

Note that this also fixes a bug in one of the philosophers samples,
where it would ask for 6 cooperative priorities but then use values -7
through -2.  It was assuming the kernel would magically create a
cooperative priority for its idle thread, which wasn't correct even
before.

Fixes #34584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-24 23:38:16 -04:00
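
The resulting priority space can be pictured with a small example, assuming CONFIG_NUM_COOP_PRIORITIES=4 and CONFIG_NUM_PREEMPT_PRIORITIES=4 (the values and comments below are illustrative, not taken from the commit):

```c
/* Priority layout under the new scheme (lower value = more urgent):
 *
 *   -4 .. -1 : cooperative priorities, never preempted
 *    0 ..  3 : preemptible, user-accessible priorities
 *    4       : idle thread, below all user priorities and preemptible
 */
#define EXAMPLE_COOP_PRIO    K_PRIO_COOP(0)    /* evaluates to -4 here */
#define EXAMPLE_PREEMPT_PRIO K_PRIO_PREEMPT(1) /* evaluates to 1 */
```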
Andy Ross bd077561d0 kernel/swap: Add assertion to catch lock-breaking context switches
Our z_swap() API takes a key returned from arch_irq_lock() and
releases it atomically with the context switch.  Make sure that the
action of the unlocking is to unmask interrupts globally.  If
interrupts would still be masked then that means there is an OUTER
interrupt lock still held, and the code that locked it surely doesn't
expect the thread to be suspended and interrupts unmasked while it's
held!

Unfortunately, this kind of mistake is very easy to make.  We should
catch that with a simple assertion.  This is essentially a crude
Zephyr equivalent of the extremely common "BUG: scheduling while
atomic" error in Linux drivers (just google it).

The one exception made is the circumstance where a thread has already
aborted itself.  At that stage, whatever lock state existed up the
call chain will already have been torn down, so there's no value in
asserting here.  We can't catch all bugs, and this can actually
happen in error handling and/or test frameworks.

Fixes #33319

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-05-17 15:27:37 -04:00
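
A crude sketch of what the assertion checks, not the actual Zephyr code (arch_irq_unlocked() is the real arch predicate; thread_self_aborting() is a hypothetical stand-in for the self-abort exception):

```c
#include <assert.h>
#include <stdbool.h>

extern bool arch_irq_unlocked(unsigned int key); /* provided by the arch */
extern bool thread_self_aborting(void);          /* hypothetical helper */

/* On entry to swap, the irq-lock key must restore interrupts to the
 * globally unmasked state -- otherwise an outer lock is still held.
 */
static void swap_entry_check(unsigned int key)
{
	assert(arch_irq_unlocked(key) || thread_self_aborting());
}
```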
Daniel Leung 1310ad6b0e linker: add bits for pinned regions
This adds the necessary bits for linker scripts and source code
to specify which symbols need to be pinned in memory. This is
needed for demand paging, as some functions and data must reside
in memory at all times and cannot be paged out (e.g. the paging,
scheduler, and interrupt handling routines).

It is up to the arch/SoC/board to define the sections in their
linker scripts, as the pinned section may need special alignment
that cannot be expressed in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Daniel Leung d812728ec4 linker: add bits for boot regions
This adds the necessary bits for linker scripts and source code
to specify which symbols are needed for the boot process so they
can be grouped together.

One use of this is to group boot-related code and data so they
are not interleaved with the rest of the kernel and application,
which is better for caching.

This is a must for demand paging, as some functions and data
must be available during the boot process, before the memory
manager is initialized. During this time paging cannot be used,
so symbols linked in the virtual memory space are unavailable.

It is up to the arch/SoC/board to define the sections in their
linker scripts, as the sections may need special alignment
that cannot be expressed in common script snippets.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-10 16:00:43 -05:00
Torbjörn Leksell f17144349b Tracing: Thread tracing
Add thread tracing hooks, default hooks, and documentation.

Signed-off-by: Torbjörn Leksell <torbjorn.leksell@percepio.com>
2021-05-07 22:10:21 -04:00
Daniel Leung 085d3768e1 kernel: mmu: introduce arch_page_phys_get()
This adds a new function prototype for arch_page_phys_get(),
which will be used to translate mapped virtual addresses back
to physical memory addresses. This is needed for the future
k_mem_unmap() function, which requires it to find
the corresponding page frame. Looking through the page tables
is faster than doing a linear search of the page frame array.

A weak function is provided in case arch_page_phys_get()
is not implemented at the arch level. It simply goes
through all the page frames and finds the one mapped to
the virtual address (see the sketch after this entry).

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
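
A sketch of what the weak fallback does; the frame layout and the names z_page_frames/Z_NUM_PAGE_FRAMES below are assumptions for illustration:

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

struct page_frame {
	void *addr;     /* mapped virtual address, if any */
	uintptr_t phys; /* physical address of this frame */
};

extern struct page_frame z_page_frames[];
extern const size_t Z_NUM_PAGE_FRAMES;

/* Weak default: linear scan of the page frame array for the frame
 * whose mapped virtual address matches.
 */
__attribute__((weak)) int arch_page_phys_get(void *virt, uintptr_t *phys)
{
	for (size_t i = 0; i < Z_NUM_PAGE_FRAMES; i++) {
		if (z_page_frames[i].addr == virt) {
			*phys = z_page_frames[i].phys;
			return 0;
		}
	}
	return -EFAULT; /* virt is not mapped */
}
```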
Jennifer Williams ca75bbef3c tests: boot_time: remove all the code and instrumentation feeding into test
Remove the config BOOT_TIME_MEASUREMENT and the corresponding #ifdef'd
code throughout (kernel/init.c, idle.c, core/common.S, reset.S, ...)
which held the extern hooks for z_timestamp_main and z_timestamp_idle
in the removed boot_time test suite.

Signed-off-by: Jennifer Williams <jennifer.m.williams@intel.com>
2021-05-05 10:41:15 -04:00
Daniel Leung 1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate the hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf-generated data
needs to be placed at the end of memory to avoid pushing symbols
around, which prevents moving these generated blocks to earlier
sections, for example the pinned data section needed for demand
paging. So create placeholders for use in intermediate linking
to reserve space for these generated blocks. Due to uncertainty
about the size of these blocks, extra space is reserved, which
could result in wasted space. This does, however, retain the use
of a hash table for faster lookups.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Nicolas Pitre f97d12936e kernel: add an architecture specific structs header
Add the ability to define architecture specific structures, notably
the ability to extend struct _cpu with per-CPU arch-specific data that
can be accessed with _current_cpu->arch.*, similarly to _current->arch.*
for per-thread architecture data.

This is opt-in for architectures that want to benefit from this,
otherwise empty defaults are provided. A placeholder for ARM64 is
included to show the pattern.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-04-21 09:03:47 -04:00
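
A sketch of the opt-in pattern (the ARM64 field is purely illustrative; only the overall shape follows the commit):

```c
#include <stdint.h>

#ifdef CONFIG_ARM64
struct _cpu_arch {
	uint64_t illustrative_percpu_value; /* arch-private per-CPU data */
};
#else
struct _cpu_arch {
	/* empty default; GNU C permits empty structs */
};
#endif

/* struct _cpu embeds one of these, reachable as _current_cpu->arch.* */
```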
Carlo Caione 64dfa69681 aarch64: Remove useless _curr_cpu struct
Currently _curr_cpu is only used by the get_cpu macro to quickly access
the cpu struct. This is not really necessary because the struct can be
accessed directly by referencing &(_kernel.cpus[cpu_num]) in assembly.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-04-09 08:10:10 -04:00
Daniel Leung 8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record the execution time of eviction selection
and of backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Anas Nashif 25c87db860 kernel/arch: cleanup function definitions
Make the identifiers used in the declaration and definition of a
function identical. This is based on MISRA rule 8.3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif bbbc38ba8f kernel: Make both operands of operators of same essential type category
Add a 'U' suffix to values when computing and comparing against
unsigned variables, along with other related fixes for the same
MISRA rule (10.4).

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-04-01 05:34:17 -04:00
Anas Nashif d8f698703b kernel: idle/z_sched_prio_cmp: match implementation to prototype
The identifiers used in the declaration and definition of a function
shall be identical [MISRAC2012-RULE_8_3-b]

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-03-29 07:52:42 -04:00
Katsuhiro Suzuki 59903e2934 kernel: arch: introduce k_float_enable()
This patch introduces a new API to enable the FPU for a thread, as
the counterpart to the existing k_float_disable() API. It also adds
an empty arch_float_enable() to each architecture that has
arch_float_disable(). The arc and riscv architectures already
implement arch_float_enable(), so those are left untouched.

Motivation: the current Zephyr implementation does not allow using
the FPU on the main thread or other system threads such as work
queues. Users need to create a separate thread with K_FP_REGS for
floating point programs. Users can use the FPU more easily if they
can enable it on running threads (see the usage sketch after this
entry).

Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
2021-03-25 14:13:23 +01:00
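
A usage sketch, assuming k_float_enable() mirrors k_float_disable() with an added options argument as described above:

```c
#include <kernel.h>

void enable_fpu_on_current_thread(void)
{
	/* Turn on FPU context preservation for the running thread */
	int rc = k_float_enable(k_current_get(), 0);

	if (rc == 0) {
		/* FPU registers are now saved/restored across switches */
		volatile float f = 3.0f / 2.0f;

		(void)f;
	}
}
```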
Enjia Mai 4aed856d7f kernel: smp: Remove unused internal API z_smp_reacquire_global_lock()
The internal function z_smp_reacquire_global_lock() is not used
anywhere in the zephyr code, so remove it.

Fixes #33273.

Signed-off-by: Enjia Mai <enjiax.mai@intel.com>
2021-03-14 18:32:26 -04:00
Andy Ross deca2301f6 kernel/swap: Move arch_cohere_stacks() back under the lock
Commit 6b84ab3830 ("kernel/sched: Adjust locking in z_swap()") moved
the call to arch_cohere_stacks() out of the scheduler lock while doing
some reorganizing.  On further reflection, this is incorrect.  When
done outside the lock, the two arch_cohere_stacks() calls will race
against each other.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
Flavio Ceolin 9b246aba78 power: Make pm_system_resume private
This API is not intended to be public and it is called only from the
idle thread.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Flavio Ceolin 10f29359d7 power: Make pm_system_suspend private
pm_system_suspend is called only from the idle thread and should
not be exported as a public API.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-03-07 07:59:53 -05:00
Spoorthy Priya Yerabolu 4118ed1d4d kernel: sched: removing dead code
Due to recent changes to the scheduler, z_find_first_thread_to_unpend()
and z_remove_thread_from_ready_q() are no longer used, so remove the
dead code.

fixes: #32691

Signed-off-by: Spoorthy Priya Yerabolu <spoorthy.priya.yerabolu@intel.com>
2021-03-05 11:05:25 +03:00
Peter Bigot 0259c864df kernel: add private scheduler APIs
These functions are a subset of proposed public APIs to clean up
several issues related to safely handling waking of threads.  They
have been made private as their interface may change, but their use
will simplify the reimplementation of the k_work functionality.

See: https://github.com/zephyrproject-rtos/zephyr/pull/29668

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-03-03 20:06:00 -05:00
James Harris 2cd0f66515 kernel: sched: change to 3-way thread priority comparison
`z_is_t1_higher_prio_than_t2` was being called twice in both the
context-switch fastpath and in `z_priq_rb_lessthan`, just to
deal with priority ties. In addition, the API was error-prone
(and too deep in the fastpath to be able to assert its invariants)
- see also #32710 for a previous example of this API breaking
and returning a>b but also b>a.

Replacing this with a direct 3-way comparison `z_cmp_t1_prio_with_t2`
sidesteps most of these issues. There is still a concern that
`sgn(z_cmp_t1_prio_with_t2(a,b)) != -sgn(z_cmp_t1_prio_with_t2(b,a))`
but I don't see any way to alleviate this aside from adding an
assert to the fastpath.

Signed-off-by: James Harris <james.harris@intel.com>
2021-03-02 14:27:14 -05:00
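
The shape of the 3-way comparison, as a simplified sketch (the real z_cmp_t1_prio_with_t2() also has to account for deadline scheduling, and its sign convention may differ):

```c
#include <stdint.h>

/* Negative: t1 has the higher priority (lower numeric value is more
 * urgent in Zephyr); zero: a tie; positive: t2 has the higher priority.
 * Plain subtraction keeps sgn(cmp(a,b)) == -sgn(cmp(b,a)) for the
 * small priority range.
 */
static inline int32_t cmp_prio_sketch(int t1_prio, int t2_prio)
{
	return (int32_t)t1_prio - (int32_t)t2_prio;
}
```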
Andy Ross 6fb6d3cfbe kernel: Add new k_thread_abort()/k_thread_join()
Add a newer, much smaller and simpler implementation of abort and
join.  No need to involve the idle thread.  No need for a special code
path for self-abort.  Joining a thread and waiting for an aborting one
to terminate elsewhere share an implementation.  All work in both
calls happens under a single locked path with no unexpected
synchronization points.

This fixes a bug with the current implementation where the action of
z_sched_single_abort() was nonatomic, releasing the lock internally at
a point where the thread to be aborted could self-abort and confuse
the state such that it failed to abort at all.

Note that the arm32 and native_posix architectures, which have their
own thread abort implementations, now see a much simplified
"z_thread_abort()" internal API.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andy Ross 6b84ab3830 kernel/sched: Adjust locking in z_swap()
Swap was originally written to use the scheduler lock just to select a
new thread, but it would be nice to be able to rely on scheduler
atomicity later in the process (in particular it would be nice if the
assignment to cpu.current could be seen atomically).  Rework the code
a bit so that swap takes the lock itself and holds it until just
before the call to arch_switch().

Note that the local interrupt mask has always been required to be held
across the swap, so extending the lock here has no effect on latency
at all on uniprocessor setups, and even on SMP only affects average
latency and not worst case.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-24 16:39:15 -05:00
Andrei Emeltchenko 377456c5af kernel: Move LOCKED() macro to kernel_internal.h
Remove duplication in the code by moving the LOCKED() macro into
the kernel_internal.h header, where it belongs.

Signed-off-by: Andrei Emeltchenko <andrei.emeltchenko@intel.com>
2021-02-22 14:56:37 -05:00
Daniel Leung ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig, CONFIG_SRAM_OFFSET, to specify the offset
from the beginning of SRAM where the kernel begins. On x86 and
PC-compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung ec21c0b92f kernel: mmu: fix boot address translation macros
The Z_BOOT_VIRT_TO_PHYS() and Z_BOOT_PHYS_TO_VIRT() address
translation macros are flipped in their calculations.
The calculation is supposed to be:

  virt = phys + ((KERNEL_VM_BASE + KERNEL_VM_OFFSET) -
                 SRAM_BASE_ADDRESS)

So fix them.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
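
Spelled out as a sketch (names follow the commit text; this is the arithmetic, not the exact macro source):

```c
#define BOOT_VIRT_OFFSET \
	((KERNEL_VM_BASE + KERNEL_VM_OFFSET) - SRAM_BASE_ADDRESS)

#define Z_BOOT_PHYS_TO_VIRT(phys) ((phys) + BOOT_VIRT_OFFSET)
#define Z_BOOT_VIRT_TO_PHYS(virt) ((virt) - BOOT_VIRT_OFFSET)
```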
Peter Bigot d554d34137 device: add post-process of elf file to manage device handles
Following the idiom used for system calls, add script support to read
the initial application binary to identify which devices are defined,
and to use their offset in the device array as their unique handle
rather than the externally-defined ordinal from devicetree.  The
device dependency arrays are updated to use these handles.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 15:46:16 -05:00
Peter Bigot 1cadd8b305 device: perform dynamic device initialization during system startup
Initialize all device objects in a batch before invoking any code that
might try to reference data in them.  This eliminates a race condition
enabled by the ability to resolve a device structure at build time,
and reference it from one device's initialization routine before the
device itself has been initialized.

While the devices are pulled from the sys_init records rather than
from the static device definitions, all in-tree init_entry records
associated with devices are produced via Z_DEVICE_DEFINE(), so no
static devices should be missed by iterating over the device
records instead.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-02-19 10:11:20 -05:00
Andy Ross c7d0cb6641 include/kernel_arch_interface.h: Redocument arch_switch()
Some recent changes exposed some common "arch_switch() anti-patterns"
in various architectures.  The documentation technically described
this all correctly, but probably wasn't as clear as it should have
been.  Rewrite, making clear exactly what needs to happen and how the
fields should be interpreted.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 4ff457113e kernel/sched: Fix rare SMP deadlock
It was possible with pathological timing (see below) for the scheduler
to pick a cycle of threads on each CPU and enter the context switch
path on all of them simultaneously.

Example:
   * CPU0 is idle, CPU1 is running thread A
   * CPU1 makes high priority thread B runnable
   * CPU1 reaches a schedule point (or returns from an interrupt) and
     decides to run thread B instead
   * CPU0 simultaneously takes its IPI and returns, selecting thread A

Now both CPUs enter wait_for_switch() to spin, waiting for the context
switch code on the other thread to finish and mark the thread
runnable.  So we have a deadlock, each CPU is spinning waiting for the
other!

Actually, in practice this seems not to happen on existing hardware
platforms; it's only exercisable in emulation.  The reason is that the
hardware IPI time is much faster than the software paths required to
reach a schedule point or interrupt exit, so CPU1 always selects the
newly scheduled thread and no deadlock appears.  I tried for a bit to
make this happen with a cycle of three threads, but it's complicated
to get right and I still couldn't get the timing to hit correctly.  In
qemu, though, the IPI is implemented as a Unix signal sent to the
thread running the other CPU, which is far slower and opens the window
to see this happen.

The solution is simple enough: don't store the _current thread in
the run queue until we are on the tail end of the context switch
path, after wait_for_switch(), when we are guaranteed to reach the
end in bounded time.

Note that this requires changing a little logic to handle the yield
case: because we can no longer rely on _current's position in the run
queue to suppress it, we need to do the priority comparison directly
based on the existing "swap_ok" flag (which has always meant
"yielded", and maybe should be renamed).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 91946ef21c kernel/sched: Refactor, unify management of QUEUED state
The QUEUED state flag was managed separately from the run queue
insertion/deletion, and the logic (while AFAICT perfectly correct) was
tangled in a few places trying to keep them in sync.  Put the
management of both behind a queue_thread()/dequeue_thread() API for
clarity.  The ALWAYS_INLINE usage seems to be working to get the
compiler to condense the resulting multiple assignments.  No behavior
change.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross dd43221540 kernel/sched: Fix race with switch handle
The "null out the switch handle and put it back" code in the swap
implementation is a holdover from some defensive coding (not wanting
to break the case where we picked our current thread), but it hides a
subtle SMP race: when that field goes NULL, another CPU that may have
selected that thread (which is to say, our current thread) as its next
to run will be spinning on that to detect when the field goes
non-NULL.  So it will get the signal to move on when we revert the
value, when clearly we are still running on the stack!

In practice this was found on x86 which poisons the switch context
such that it crashes instantly.

Instead, be firm about state and always set the switch handle of a
currently running thread to NULL immediately before it starts running:
right before entering arch_switch() and symmetrically on the interrupt
exit path.

Fixes #28105

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-14 16:22:45 -05:00
Andy Ross 1d51e888d8 kernel/z_swap: Remove on-stack dummy spinlock
The z_swap_unlocked() function used a dummy spinlock for simplicity.
But this runs afoul of the check for stack-resident spinlocks
(forbidden when KERNEL_COHERENCE is set).  And it's executing needless
code to release the lock anyway.  Replace it with a compile-time NULL,
which improves performance, correctness and code size.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-11 14:47:40 -05:00
Andy Ross 604f0f44b6 kernel/sched: Add missing lock around waitq unpend calls
The two calls to unpend a thread from a wait queue were inexplicably*
unsynchronized, as James Harris discovered.  Rework them to call the
lowest level primitives so we can wrap the process inside the scheduler
lock.

Fixes #32136

* I took a brief look.  What seems to have happened here is that these
  were originally synchronized via an implicit lock from an outer caller
  (remember the original uniprocessor irq_lock() API is a recursive
  lock), and they were mostly implemented in terms of middle-level
  calls that were themselves locked.  So those got ported over to the
  newer spinlock but the outer wrapper layer got forgotten.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-02-10 07:43:18 -05:00
Daniel Leung 371752bce3 kernel: tls: align tdata/tbss sections in stack
This lets the linker tell us what kind of alignment is required
for both the tdata and tbss data when copying them into the stack.
If they are not aligned as expected by the toolchain, the generated
code would access incorrect locations for thread variables.

Fixes #32015

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-07 23:28:43 -05:00
Nicolas Pitre f9461d1ac4 mmu: fix ARM64 compilation by removing z_mapped_size usage
The linker script defines `z_mapped_size` as follows:

```
	z_mapped_size = z_mapped_end - z_mapped_start;
```

This is done with the belief that precomputed values at link time will
make the code smaller and faster.

On Aarch64, symbol values are relocated and loaded relative to the PC
as those are normally meant to be memory addresses.

Now if you have e.g. `CONFIG_SRAM_BASE_ADDRESS=0x2000000000` then
`z_mapped_size` might still have a reasonable value, say 0x59334.
But, when interpreted as an address, that's very very far from the PC
whose value is in the neighborhood of 0x2000000000. That overflows the
4GB relocation range:

```
kernel/libkernel.a(mmu.c.obj): in function `z_mem_manage_init':
kernel/mmu.c:527:(.text.z_mem_manage_init+0x1c):
relocation truncated to fit: R_AARCH64_ADR_PREL_PG_HI21
```

The solution is to define `Z_KERNEL_VIRT_SIZE` in terms of
`z_mapped_end - z_mapped_start` at the source code level. Given this
is used within loops that already start with `z_mapped_start` anyway,
the compiler is smart enough to combine the two occurrences and
dispense with a size counter, making the code effectively
slightly better for all while avoiding the Aarch64 relocation
overflow:

```
   text    data     bss     dec     hex filename
   1216       8  294936  296160   484e0 mmu.c.obj.arm64.before
   1212       8  294936  296156   484dc mmu.c.obj.arm64.after
   1110       8    9244   10362    287a mmu.c.obj.x86-64.before
   1106       8    9244   10358    2876 mmu.c.obj.x86-64.after
```

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-02-05 17:19:56 -05:00
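
The essence of the fix, sketched: compute the size in C from the two boundary symbols (which relocate fine as addresses) instead of relying on a linker-computed absolute value:

```c
#include <stddef.h>

extern char z_mapped_start[];
extern char z_mapped_end[];

/* A pure difference of two in-range addresses; no absolute symbol
 * needs to survive relocation.
 */
#define Z_KERNEL_VIRT_SIZE ((size_t)(z_mapped_end - z_mapped_start))
```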
Andrew Boie 14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie c7be5dddda mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 60d306642e kernel: add z_num_pagefaults_get()
Simple counter of number of successfully handled page faults by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 431b7c0fe5 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie a6eca9fab6 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See
doxygen for these API definitions for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie ecb25fec51 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 299a2cf62e mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie e35f179db3 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie 73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Peter Bigot affa7a1c7e Revert "device: add post-process of elf file to manage device handles"
This reverts commit 40d3653758.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-23 18:01:03 -05:00
Anas Nashif db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 8e84eaf73e Revert "kernel: add page frame management"
This reverts commit 2ca5fb7e06.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif a2ec139bf7 Revert "mmu: arch_mem_map() may no longer fail"
This reverts commit db56722729.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif d887e078f9 Revert "mmu: ensure gperf data is mapped"
This reverts commit e9bfd64110.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 65122b776a Revert "kernel: add demand paging arch interfaces"
This reverts commit b8ae437967.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif cd0beca292 Revert "kernel: add demand paging internal interfaces"
This reverts commit 3e51a7a775.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif c2c87c99c7 Revert "kernel: add z_num_pagefaults_get()"
This reverts commit d7e6bc3e84.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif 5e978d237c Revert "mmu: backing stores reserve page fault room"
This reverts commit 7a642f81ab.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Andrew Boie 7a642f81ab mmu: backing stores reserve page fault room
If we evict enough pages to completely fill the backing store,
through APIs like k_mem_map(), z_page_frame_evict(), or
z_mem_page_out(), this will produce a crash the next time we
try to handle a page fault.

The backing store now always reserves a free storage location
for actual page faults.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie d7e6bc3e84 kernel: add z_num_pagefaults_get()
Simple counter of number of successfully handled page faults by
the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 3e51a7a775 kernel: add demand paging internal interfaces
APIs used by backing store and eviction algorithms.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie b8ae437967 kernel: add demand paging arch interfaces
Architecture layer hooks for demand paging. See
doxygen for these API definitions for more details.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie e9bfd64110 mmu: ensure gperf data is mapped
Page tables created at build time may not include the
gperf data at the very end of RAM. Ensure this is mapped
properly at runtime to work around this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie db56722729 mmu: arch_mem_map() may no longer fail
Pre-allocation of paging structures is now required, such that
no allocations are ever needed when mapping memory.

Instantiation of new memory domains may still require allocations
unless a common page table is used.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 2ca5fb7e06 kernel: add page frame management
Initialize the page frame ontology at boot and update it
when we do memory mappings.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie 9d2ebfff58 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Peter Bigot 40d3653758 device: add post-process of elf file to manage device handles
Following the idiom used for system calls, add script support to read
the initial application binary to identify which devices are defined,
and to use their offset in the device array as their unique handle
rather than the externally-defined ordinal from devicetree.  The
device dependency arrays are updated to use these handles.

Signed-off-by: Peter Bigot <peter.bigot@nordicsemi.no>
2021-01-21 14:49:04 -06:00
Daniel Leung 0c9f9691c4 kernel: mempool: add z_thread_aligned_alloc
This adds a new z_thread_aligned_alloc() to do memory allocation
with required alignment.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-13 09:43:55 -08:00
Andrew Boie d2ad783a97 mmu: rename z_mem_map to z_phys_map
Renamed to make its semantics clearer; this function maps
*physical* memory addresses and is not equivalent to
posix mmap(), which might confuse people.

The mem_map test case keeps the same name, as other memory
mapping scenarios will be added in the fullness of time.

Parameter names to z_phys_map were adjusted slightly to be more
consistent with the names used in other memory mapping functions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-12-16 08:55:55 -05:00
Anas Nashif dd931f93a2 power: standarize PM Kconfigs and cleanup
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE

and use PM_ as the prefix for all PM related Kconfigs

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Carlo Caione a7d94b003e aarch64: Use absolute symbols for the callee saved registers
Use the GEN_OFFSET_SYM macro to generate absolute symbols for the
_callee_saved struct and use these new symbols in the assembly code.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-17 18:59:23 -05:00
Daniel Leung 11e6b43090 tracing: roll thread switch in/out into thread stats functions
Since tracing of threads being switched in/out has the same
instrumentation points, we can roll the tracing function calls
into the thread stats gathering functions. This avoids
duplicating code just to call another function.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Daniel Leung fc577c4bd1 kernel: gather basic thread runtime statistics
This adds the bits to gather the first thread runtime statistic:
thread execution time. It provides a rough idea of how much time
a thread spends actively executing. Currently it is not being
used, pending the following commits where it is combined with the
trace points on context switch, as they instrument the same locations.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-11-11 23:55:49 -05:00
Daniel Leung 02b20351cd kernel: add common bits to support TLS
This adds the common struct fields and functions to support
the implementation of thread local storage in the individual
architectures. This uses the thread stack to store TLS data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-10-24 10:52:00 -07:00
Andrew Boie f5c3fc498b kernel: add arch_mem_unmap() interface
The core kernel does not use this yet, but it will be later used
as part of infrastructure for memory-mapping stacks, as detailed
in #28899.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-23 22:02:47 -04:00
Anas Nashif bf69afcdae kernel: only resume suspended threads
Do not try to resume a thread that was not suspended.

Fixes #28694

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-22 07:00:15 -04:00
Andy Ross f6d32ab0a4 kernel: Add cache coherence management framework
Zephyr SMP kernels need to be able to run on architectures with
incoherent caches.  Naive implementation of synchronization on such
architectures requires extensive cache flushing (e.g. flush+invalidate
everything on every spin lock operation, flush on every unlock!) and
is a performance problem.

Instead, many of these systems will have access to separate "coherent"
(usually uncached) and "incoherent" regions of memory.  Where this is
available, place all writable data sections by default into the
coherent region.  An "__incoherent" attribute flag is defined for data
regions that are known to be CPU-local and which should use the cache.
By default, this is used for stack memory.

Stack memory will be incoherent by default, as by definition it is
local to its current thread.  This requires special cache management
on context switch, so an arch API has been added for that.

Also, when enabled, add assertions to strategic places to ensure that
shared kernel data is indeed coherent.  We check thread objects, the
_kernel struct, waitq's, timeouts and spinlocks.  In practice almost
all kernel synchronization is built on top of these structures, and
any shared data structs will contain at least one of them.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-10-21 06:38:53 -04:00
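
A sketch of the attribute mechanics; the section name here is an assumption (the real __incoherent maps to whatever section the arch designates as cached/CPU-local):

```c
#define __incoherent __attribute__((section(".cached")))

/* Known CPU-local data opts back into the cache... */
__incoherent static char cpu_local_scratch[256];

/* ...while ordinary writable data defaults to the coherent region. */
static int shared_flag;
```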
Andrew Boie 348a0fda62 userspace: make mem domain lock non-static
Strictly speaking, any access to a mem domain or its
containing partitions should be serialized on this lock.

Architecture code may need to grab this lock if it is
using this data during, for example, context switches,
especially if they support SMP as locking interrupts
is not enough.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Andrew Boie b5a71f74a8 userspace: remove threads from domain on abort
When threads exited, we were leaving dangling references to
them in the domain's mem_domain_q.

z_thread_single_abort() now calls into the memory domain
code via z_mem_domain_exit_thread() to take the thread off.

The thread setup code now invokes z_mem_domain_init_thread(),
avoiding extra checks in k_mem_domain_add_thread(), since we know
the object isn't currently a member of a domain.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-10-20 09:37:49 -07:00
Aastha Grover 83b9f69755 code-guideline: Fixing code violation 10.4 Rule
Both operands of an operator in the arithmetic conversions
performed shall have the same essential type category.

The changes convert integer constants to unsigned integer
constants.

Signed-off-by: Aastha Grover <aastha.grover@intel.com>
2020-10-01 17:13:29 -04:00
Andrew Boie f5a7e1a108 kernel: handle thread self-aborts on idle thread
Fixes races where threads on another CPU are joining the
exiting thread, since it could still be running when
the joiners wake up on a different CPU.

Fixes problems where the thread object is still being
used by the kernel when the fn_abort() function is called,
preventing the thread object from being recycled or
freed back to a slab pool.

Fixes a race where a thread is aborted from one CPU while
it self-aborts on another CPU, that was currently worked
around with a busy-wait.

Precedent for doing this comes from FreeRTOS, which also
performs final thread cleanup in the idle thread.

Some logic in z_thread_single_abort() was rearranged such that
when we release sched_spinlock, the thread object pointer
is never dereferenced by the kernel again; join waiters
or fn_abort() logic may free it immediately.

An assertion was added to z_thread_single_abort() to ensure
it never gets called with thread == _current outside of an ISR.

Some logic has been added to ensure z_thread_single_abort()
tasks don't run more than once.

Fixes: #26486
Related to: #23063 #23062

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-30 14:11:59 -04:00
Anas Nashif 6e27478c3d benchmarking: remove execution benchmarking code
This code had one purpose only: feed timing information into a test.
It was not used by anything else. The custom trace points were
unfortunately not accurate, and this test was delivering information
that conflicted with other tests we have, due to the placement of
such trace points in the architecture and kernel code.

For such measurements we are planning to use the tracing functionality
in a special mode that would be used for metrics, without polluting the
architecture and kernel code with additional tracing and timing code.

Furthermore, much of the assembly code used had issues.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-09-05 13:28:38 -05:00
Watson Zeng 1dddbecb35 tracing: swap: bug fix and enhancement for ARC
* Move switched_in into the arch context switch assembly code,
  which will correctly record the switched_in information.

* Add switched_in/switched_out for context switch in irq exit.

Signed-off-by: Watson Zeng <zhiwei@synopsys.com>
2020-09-03 21:54:15 +02:00
Andrew Boie 5e0b55c30e kernel: demote k_mem_map to z_mem_map
Memory mapping, for now, will be a private kernel API
and is not intended to be application-facing at this time.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-03 14:24:38 -04:00
Flavio Ceolin 5408f3102d debug: x86: Add gdbstub for X86
It implements the gdb remote protocol to talk with a host gdb during
the debug session. The implementation is divided into three layers:

1 - The top layer that is responsible for the gdb remote protocol.
2 - An architecture specific layer responsible for writing/reading
    registers, setting breakpoints, handling exceptions, ...
3 - A transport layer used to communicate with the host

Communication with GDB on the host is synchronous: the system stops
execution while waiting for instructions, and resumes after a
"continue" or "step" command. One exception in the protocol is the
packet the host sends to cause an interruption, usually triggered
by Ctrl-C; this implementation ignores that instruction, though.

This initial work supports only X86 using uart as backend.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2020-09-02 20:54:57 -04:00
Andrew Boie ffc1da08f9 kernel: add z_thread_single_abort to private hdr
We shouldn't be copy-pasting extern declarations like this.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-09-02 15:02:06 -07:00
Tomasz Bursztyka e18fcbba5a device: Const-ify all device driver instance pointers
Now that the device_api attribute, like all the other attributes, is
unmodified at runtime, it is possible to switch all device driver
instances to be constant.

A coccinelle rule is used for this:

@r_const_dev_1
  disable optional_qualifier
@
@@
-struct device *
+const struct device *

@r_const_dev_2
 disable optional_qualifier
@
@@
-struct device * const
+const struct device *

Fixes #27399

Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
2020-09-02 13:48:13 +02:00
Andrew Boie 9bfc8d82d0 userspace: introduce default memory domain
We make a policy change here: all threads are members of a
memory domain, never NULL. We introduce a default memory domain
for threads that haven't been assigned to or inherited another one.

Primary motivation for this change is better MMU support, as
one common configuration will be to maintain page tables at
the memory domain level.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-08-26 20:35:02 -04:00
Daniel Leung 49206a86ff debug/coredump: add a primitive coredump mechanism
This adds a very primitive coredump mechanism under subsys/debug
where, during a fatal error, register and memory content can be
dumped to a coredump backend. One such backend, utilizing the log
module for output, is included. Once the coredump log is converted
to a binary file, it can be used together with the ELF output file
as inputs to an overly simplified implementation of a GDB server.
This GDB server can be attached via GDB's target remote command
and will serve register and memory content. This allows using GDB
to examine the stack and memory where the fatal error occurred.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2020-08-24 20:28:24 -04:00
Anas Nashif d1049dc258 tracing: swap: cleanup trace points and their location
Move tracing switched_in and switched_out to the architecture code and
remove duplications. This changes swap tracing for x86, xtensa.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-08-24 13:21:12 +02:00
Andrew Boie 8b4b0d6264 kernel: z_interrupt_stacks are now kernel stacks
This will save memory on many platforms that enable
user mode.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 8ce260d8df kernel: introduce supervisor-only stacks
These stacks are appropriate for threads that run purely in
supervisor mode, and also as stacks for interrupt and exception
handling.

Two new arch defines are introduced:

- ARCH_KERNEL_STACK_GUARD_SIZE
- ARCH_KERNEL_STACK_OBJ_ALIGN

New public declaration macros:

- K_KERNEL_STACK_RESERVED
- K_KERNEL_STACK_EXTERN
- K_KERNEL_STACK_DEFINE
- K_KERNEL_STACK_ARRAY_DEFINE
- K_KERNEL_STACK_MEMBER
- K_KERNEL_STACK_SIZEOF

If user mode is not enabled, K_KERNEL_STACK_* and K_THREAD_STACK_*
are equivalent.

Separately generated privilege elevation stacks are now declared
like kernel stacks, removing the need for K_PRIVILEGE_STACK_ALIGN.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie e4cc84a537 kernel: update arch_switch_to_main_thread()
This now takes a stack pointer as an argument with TLS
and random offsets accounted for properly.

Based on #24467 authored by Flavio Ceolin.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie b0c155f3ca kernel: overhaul stack specification
The core kernel computes the initial stack pointer
for a thread, properly aligning it and subtracting out
any random offsets or thread-local storage areas.
arch_new_thread() no longer needs to make any calculations,
an initial stack frame may be placed at the bounds of
the new 'stack_ptr' parameter passed in. This parameter
replaces 'stack_size'.

thread->stack_info is now set before arch_new_thread()
is invoked, z_new_thread_init() has been removed.
The values populated may need to be adjusted on arches
which carve out MPU guard space from the actual stack
buffer.

thread->stack_info now has a new member 'delta' which
indicates any offset applied for TLS or random offset.
It's used so the calculations don't need to be repeated
if the thread later drops to user mode.

CONFIG_INIT_STACKS logic is now performed inside
z_setup_new_thread(), before arch_new_thread() is called.

thread->stack_info is now defined as the canonical
user-accessible area within the stack object, including
random offsets and TLS. It will never include any
carved-out memory for MPU guards and must be updated at
runtime if guards are removed.

Available stack space is now optimized. Some arches may
need to significantly round up the buffer size to account
for page-level granularity or MPU power-of-two requirements.
This space is now accounted for and used by virtue of
the Z_THREAD_STACK_SIZE_ADJUST() call in z_setup_new_thread.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie aa464052ff arch_interface: remove CamelCase
Naming cleanup of this interface declaration.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 62eb7d99dc arch_interface: remove unnecessary params
arch_new_thread() passes along the thread priority and option
flags, but these are already initialized in thread->base and
can be accessed there if needed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-30 21:11:14 -04:00
Andrew Boie 9ff148ac83 kernel: define arch_mem_map()
This is the low-level arch function to map a region into page
tables.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-07-17 11:38:18 +02:00
Anas Nashif 2c5d40437b kernel: logging: convert K_DEBUG to LOG_DBG
Move K_DEBUG to use LOG_DBG instead of plain printk.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-06-25 16:12:36 -05:00
Andrew Boie 4855eaa735 kernel: document arch_printk_char_out()
Used by very early console drivers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-06-17 09:20:55 +02:00
Kumar Gala a1b77fd589 zephyr: replace zephyr integer types with C99 types
git grep -l 'u\(8\|16\|32\|64\)_t' | \
	xargs sed -i "s/u\(8\|16\|32\|64\)_t/uint\1_t/g"
git grep -l 's\(8\|16\|32\|64\)_t' | \
	xargs sed -i "s/s\(8\|16\|32\|64\)_t/int\1_t/g"

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2020-06-08 08:23:57 -05:00
Andrew Boie 468efadd47 kernel: simplify dummy thread implementation
- simplify dummy thread initialization to a kswap.h
  inline function

- use the same inline function for both early boot and
  SMP setup

- add a note on necessity of the dummy thread even if
  a custom swap to main is implemented

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-13 21:23:52 +02:00
Andrew Boie a203d21962 kernel: remove legacy fields in _kernel
UP should just use _kernel.cpus[0].

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-05-08 17:42:49 +02:00
Stephanos Ioannidis aaf93205bb kconfig: Rename CONFIG_FP_SHARING to CONFIG_FPU_SHARING
This commit renames the Kconfig `FP_SHARING` symbol to `FPU_SHARING`,
since this symbol specifically refers to the hardware FPU sharing
support by means of FPU context preservation, and the "FP" prefix is
not fully descriptive of that, leaving room for ambiguity.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-05-08 10:58:33 +02:00
Stephanos Ioannidis 0e6ede8929 kconfig: Rename CONFIG_FLOAT to CONFIG_FPU
This commit renames the Kconfig `FLOAT` symbol to `FPU`, since this
symbol only indicates that the hardware Floating Point Unit (FPU) is
used and does not imply and/or indicate the general availability of
toolchain-level floating point support (i.e. this symbol is not
selected when building for an FPU-less platform that supports floating
point operations through the toolchain-provided software floating point
library).

Moreover, given that the symbol that indicates the availability of FPU
is named `CPU_HAS_FPU`, it only makes sense to use "FPU" in the name of
the symbol that enables the FPU.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2020-04-27 19:03:44 +02:00
Andrew Boie c0df99cc77 kernel: reduce scope of z_new_thread_init()
The core kernel z_setup_new_thread() calls into arch_new_thread(),
which calls back into the core kernel via z_new_thread_init().

Move everything that doesn't have to be in z_new_thread_init() to
z_setup_new_thread() and convert to an inline function.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-04-21 18:45:45 -04:00
Andy Ross 7832738ae9 kernel/timeout: Make timeout arguments an opaque type
Add a k_timeout_t type, and use it everywhere that kernel API
functions were accepting a millisecond timeout argument.  Instead of
forcing milliseconds everywhere (which are often not integrally
representable as system ticks), do the conversion to ticks at the
point where the timeout is created.  This avoids an extra unit
conversion in some application code, and allows us to express the
timeout in units other than milliseconds to achieve greater precision.

The existing K_MSEC() et. al. macros now return initializers for a
k_timeout_t.

The K_NO_WAIT and K_FOREVER constants have now become k_timeout_t
values, which means they cannot be operated on as integers.
Applications which have their own APIs that need to inspect these
vs. user-provided timeouts can now use a K_TIMEOUT_EQ() predicate to
test for equality.

Timer drivers, which receive an integer tick count in their
z_clock_set_timeout() functions, now use the integer-valued
K_TICKS_FOREVER constant instead of K_FOREVER.

For the initial release, to preserve source compatibility, a
CONFIG_LEGACY_TIMEOUT_API kconfig is provided.  When true, the
k_timeout_t will remain a compatible 32 bit value that will work with
any legacy Zephyr application.

Some subsystems present timeout (or timeout-like) values to their own
users as APIs that would re-use the kernel's own constants and
conventions.  These will require some minor design work to adapt to
the new scheme (in most cases just using k_timeout_t directly in their
own API), and they have not been changed in this patch, instead
selecting CONFIG_LEGACY_TIMEOUT_API via kconfig.  These subsystems
include: CAN Bus, the Microbit display driver, I2S, LoRa modem
drivers, the UART Async API, Video hardware drivers, the console
subsystem, and the network buffer abstraction.

k_sleep() now takes a k_timeout_t argument, with a k_msleep() variant
provided that works identically to the original API.

Most of the changes here are just type/configuration management and
documentation, but there are logic changes in mempool, where a loop
that used a timeout numerically has been reworked using a new
z_timeout_end_calc() predicate.  Also, in queue.c (when POLL was
enabled), a similar loop was needlessly used to try to retry the
k_poll() call after a spurious failure.  But k_poll() does not fail
spuriously, so the loop was removed.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-03-31 19:40:47 -04:00
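
A usage sketch of the opaque timeout type and its helpers:

```c
#include <kernel.h>

void timeout_usage(struct k_sem *sem)
{
	k_sleep(K_MSEC(100)); /* K_MSEC() now yields a k_timeout_t */
	k_msleep(100);        /* convenience variant of the old API */

	k_timeout_t t = K_NO_WAIT;

	/* K_NO_WAIT/K_FOREVER are no longer integers; compare like this */
	if (!K_TIMEOUT_EQ(t, K_FOREVER)) {
		(void)k_sem_take(sem, t);
	}
}
```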
Andrew Boie 28be793cb6 kernel: delete separate logic for priv stacks
This never needed to be put in a separate gperf table.
Privilege mode stacks can be generated by the main
gen_kobject_list.py logic, which we do here.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-17 20:11:27 +02:00
Andrew Boie fb1c29475f kernel: zero app shmem bss via SYS_INIT
Doesn't need to be directly in init.c.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 21:40:52 -04:00
Andrew Boie 80a0d9d16b kernel: interrupt/idle stacks/threads as array
The set of interrupt stacks is now expressed as an array. We
also define the idle threads and their associated stacks this
way. This allows for iteration in cases where we have multiple
CPUs.

There is now a centralized declaration in kernel_internal.h.

On uniprocessor systems, z_interrupt_stacks has one element
and can be used in the same way as _interrupt_stack.

The IRQ stack for CPU 0 is now set in init.c instead of in
arch code.

The extern definition of the main thread stack is now removed,
this doesn't need to be in a header.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-03-16 23:17:36 +02:00
Andy Ross eefd3daa81 kernel/smp: arch/x86_64: Address race with CPU migration
Use of the _current_cpu pointer cannot be done safely in a preemptible
context.  If a thread is preempted and migrates to another CPU, the
old CPU record will be wrong.

Add a validation assert to the expression that catches incorrect
usages, and fix up the spots where it was wrong (the most important
being a few uses of _current outside of locks, and the
arch_is_in_isr() implementation).

Note that the resulting _current expression now requires locking and
is going to be somewhat slower.  Longer term it's going to be better
to augment the arch API to allow SMP architectures to implement a
faster "get current thread pointer" action than this default.

Note also that this change means that "_current" is no longer
expressible as an lvalue (long ago, it was just a static variable), so
the places where it gets assigned now assign to _current_cpu->current
instead.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-08 08:51:04 -05:00
Anas Nashif 73008b427c tracing: move headers under include/tracing
Move tracing.h to include/tracing/ to align with subsystem reorg.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-02-07 15:58:05 -05:00
Andrew Boie d1f50122f9 kernel: move timing externs to public header
These arch_timing_ defines get used in certain timer
drivers and need to be in the public include space,
and not the private kernel headers.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-02-06 23:07:37 -05:00
Andy Ross 96ccc46e03 kernel/sched: Put k_thread_start() under a single lock
Similar to the suspend refactoring earlier, this really needs to be
done in an atomic block.  There were two confirmable races here,
though it's not completely clear either was being hit in practice:

1. The bit operations in z_mark_thread_as_started() aren't atomic so
   it needs to be protected.

2. The intermediate state in z_ready_thread() could result in a dead
   or suspended thread being added to the ready queue if another
   context tried a simultaneous abort or suspend.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-02-03 09:31:56 -05:00
Andrew Boie 6f654bbafd mempool: use k_malloc heap for ISR allocations
Fixes an issue where calling z_thread_malloc() would
borrow the resource pool of whatever thread happened
to be interrupted at the time.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-01-24 09:27:59 -08:00
Andy Ross 3235451880 kernel/swap: Add SMP "wait for switch" synchronization
On SMP, there is an inherent race when swapping: the old thread adds
itself back to the run queue before calling into the arch layer to do
the context switch.  The former is properly synchronized under the
scheduler lock, and the latter operates with interrupts locally
disabled.  But until somewhere in the middle of arch_switch(), the old
thread (that is in the run queue!) does not have complete saved state
that can be restored.

So it's possible for another CPU to grab a thread before it is saved
and try to restore its unsaved register contents (which are garbage --
typically whatever state it had at the last interrupt).

Fix this by leveraging the "swapped_from" pointer already passed to
arch_switch() as a synchronization primitive.  When the switch
implementation writes the new handle value, we know the switch is
complete.  Then we can wait for that in z_swap() and at interrupt
exit.
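
The reader side then reduces to a spin on the handle, roughly
(simplified sketch):

    static inline void wait_for_switch(struct k_thread *thread)
    {
        volatile void **shp = (void *)&thread->switch_handle;

        /* state isn't restorable until arch_switch() stores it */
        while (*shp == NULL) {
        }
    }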

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Andy Ross 86430d8d46 kernel: arch: Clarify output switch handle requirements in arch_switch
The original intent was that the output handle be written through the
pointer in the second argument, though not all architectures used that
scheme.  As it turns out, that write is becoming a synchronization
signal, so it's no longer optional.

Clarify the documentation in arch_switch() about this requirement, and
add an instruction to the x86_64 context switch to implement it as
originally envisioned.
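
The contract amounts to this (prototype per the arch interface;
the comment is ours):

    /* Write the outgoing thread's handle through switched_from only
     * once its saved state is complete: the write doubles as the
     * "switch finished" signal that other CPUs wait on.
     */
    static inline void arch_switch(void *switch_to, void **switched_from);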

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2020-01-21 14:47:52 -08:00
Anas Nashif 0ad67650f2 tracing: better positioning of tracing points
Improve positioning of tracing calls. Avoid multiple calls and missing
events caused by complex logic. Trace events at the point where things
actually happen.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-01-09 11:21:19 -05:00
Anas Nashif 9e3e7f6dda kernel: use 'thread' for thread variable consistently
We have been using thread, th and t for thread variables, making the code
less readable, especially when we use t for timeouts and other time
related variables. Just use thread where possible and keep things
consistent.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-12-21 19:57:57 -05:00
Kumar Gala a8171db6a6 doc: Fix warnings associated with 'unbalanced grouping commands'
Builds of docs with doxygen 1.8.16 have a number of warnings of the
form: 'warning: unbalanced grouping commands'.  Fix those warnings by
either balancing the grouping command or removing it.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-12-12 12:39:35 -06:00
Flavio Ceolin 91fd6d0866 kernel: thread: Fix randomness problem with stack pointer random
On some platforms the size of size_t differs from 4 bytes. Use
sys_rand_get() to properly fill this variable.
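
For instance (sketch):

    size_t random_value;

    /* fills every byte of size_t, whether it is 4 or 8 bytes wide */
    sys_rand_get(&random_value, sizeof(random_value));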

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2019-11-15 13:43:32 -08:00
Andrew Boie 4f77c2ad53 kernel: rename z_arch_ to arch_
Promote the private z_arch_* namespace, which specifies
the interface between the core kernel and the
architecture code, to a new top-level namespace named
arch_*.

This allows our documentation generation to create
online documentation for this set of interfaces,
and this set of interfaces is worth treating in a
more formal way anyway.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-11-07 15:21:46 -08:00
Stephanos Ioannidis 37d6241ecf kernel: Un-inline z_new_thread_init.
This commit modifies the z_new_thread_init function, which was
previously declared ALWAYS_INLINE, to be a normal function.

The z_new_thread_init function is only called by the z_arch_new_thread
function and, since this is not a performance-critical function, there
is no good justification for inlining it.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Stephanos Ioannidis 2d7460482d headers: Refactor kernel and arch headers.
This commit refactors kernel and arch headers to establish a boundary
between private and public interface headers.

The refactoring strategy used in this commit is detailed in the issue

This commit introduces the following major changes:

1. Establish a clear boundary between private and public headers by
  removing "kernel/include" and "arch/*/include" from the global
  include paths. Ideally, only kernel/ and arch/*/ source files should
  reference the headers in these directories. If these headers must be
  used by a component, these include paths shall be manually added to
  the CMakeLists.txt file of the component. This is intended to
  discourage applications from including private kernel and arch
headers, whether knowingly or unknowingly.

  - kernel/include/ (PRIVATE)
    This directory contains the private headers that provide private
   kernel definitions which should not be visible outside the kernel
   and arch source code. All public kernel definitions must be added
   to an appropriate header located under include/.

  - arch/*/include/ (PRIVATE)
    This directory contains the private headers that provide private
   architecture-specific definitions which should not be visible
   outside the arch and kernel source code. All public architecture-
   specific definitions must be added to an appropriate header located
   under include/arch/*/.

  - include/ AND include/sys/ (PUBLIC)
    This directory contains the public headers that provide public
   kernel definitions which can be referenced by both kernel and
   application code.

  - include/arch/*/ (PUBLIC)
    This directory contains the public headers that provide public
   architecture-specific definitions which can be referenced by both
   kernel and application code.

2. Split arch_interface.h into "kernel-to-arch interface" and "public
  arch interface" divisions.

  - kernel/include/kernel_arch_interface.h
    * provides private "kernel-to-arch interface" definition.
    * includes arch/*/include/kernel_arch_func.h to ensure that the
     interface function implementations are always available.
    * includes sys/arch_interface.h so that public arch interface
     definitions are automatically included when including this file.

  - arch/*/include/kernel_arch_func.h
    * provides architecture-specific "kernel-to-arch interface"
     implementation.
    * only the functions that will be used in kernel and arch source
     files are defined here.

  - include/sys/arch_interface.h
    * provides "public arch interface" definition.
    * includes include/arch/arch_inlines.h to ensure that the
     architecture-specific public inline interface function
     implementations are always available.

  - include/arch/arch_inlines.h
    * includes architecture-specific arch_inlines.h in
     include/arch/*/arch_inline.h.

  - include/arch/*/arch_inline.h
    * provides architecture-specific "public arch interface" inline
     function implementation.
    * supersedes include/sys/arch_inline.h.

3. Refactor kernel and the existing architecture implementations.

  - Remove circular dependency of kernel and arch headers. The
   following general rules should be observed:

    * Never include any private headers from public headers
    * Never include kernel_internal.h in kernel_arch_data.h
    * Always include kernel_arch_data.h from kernel_arch_func.h
    * Never include kernel.h from kernel_structs.h either directly or
     indirectly. Only add the kernel structures that must be referenced
     from public arch headers in this file.

  - Relocate syscall_handler.h to include/ so it can be used in
   public code. This is necessary because much user-mode public code
   references the functions defined in this header.

  - Relocate kernel_arch_thread.h to include/arch/*/thread.h. This is
   necessary to provide architecture-specific thread definition for
   'struct k_thread' in kernel.h.

  - Remove any private header dependencies from public headers using
   the following methods:

    * If the dependency is not required, simply omit it
    * If the dependency is required,
      - Relocate a portion of the required dependencies from the
       private header to an appropriate public header OR
      - Relocate the required private header to make it public.
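
A usage sketch under the new layout (paths as listed above):

    /* application code: public headers only */
    #include <kernel.h>
    #include <sys/arch_interface.h>

    /* kernel/arch sources may additionally use private headers such
     * as kernel_internal.h, visible only once the component's
     * CMakeLists.txt adds kernel/include to its include paths */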

This commit supersedes #20047, addresses #19666, and fixes #3056.

Signed-off-by: Stephanos Ioannidis <root@stephanos.io>
2019-11-06 16:07:32 -08:00
Andrew Boie 979b17f243 kernel: activate arch interface headers
Duplicate definitions elsewhere have been removed.

A couple of functions that the arch interface defines as non-inline,
but which were implemented inline by native_posix and intel64, have
been converted to non-inline implementations.

Some missing conditional compilation for z_arch_irq_offload()
has been fixed, as this is an optional feature.

Some massaging of native_posix headers to get everything
in the right scope.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-21 10:13:38 -07:00
Andy Ross 8bc3b6f673 arch/x86/intel64: Fix assumption with dummy threads
The intel64 switch implementation doesn't actually use a switch handle
per se, just the raw thread struct pointers which get stored into the
handle field.  This works fine for normally initialized threads, but
when switching out of a dummy thread at initialization, nothing has
initialized that field and the code was dumping registers into the
bottom of memory through the resulting NULL pointer.

Fix this by skipping the load of the field value and just using an
offset instead to get the struct address, which is actually slightly
faster anyway (a SUB immediate instruction vs. the load).

Actually for extra credit we could even move the switch_handle field
to the top of the thread struct and eliminate the instruction
entirely, though if we did that it's probably worth adding some
conditional code to make the switch_handle field disappear entirely.
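
In C terms the fix amounts to the following sketch (the real change
is in assembly; handle_field_addr is a hypothetical name):

    #include <stddef.h>

    /* recover the thread struct from the *address* of its
     * switch_handle field, never loading the field's value, which
     * is uninitialized for the dummy thread */
    struct k_thread *thread = (struct k_thread *)
        ((char *)handle_field_addr -
         offsetof(struct k_thread, switch_handle));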

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-10-19 12:09:32 -07:00
Andrew Boie 8ffff144ea kernel: add architecture interface headers
include/sys/arch_inlines.h will contain all architecture APIs
that are used by public inline functions and macros,
with implementations deriving from include/arch/cpu.h.

kernel/include/arch_interface.h will contain everything
else, with implementations deriving from
arch/*/include/kernel_arch_func.h.

Instances of duplicate documentation for these APIs have been
removed; implementation details have been left in place.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-11 13:30:46 -07:00
Andrew Boie cb1dd7465b kernel: remove vestigial printk references
Logging is now used for these situations.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 16:15:06 -05:00
Andrew Boie 99b3f8617e kernel: use logging for userspace errors
We want to use a single API for this in kernel code.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-10-01 10:23:03 -07:00
Andrew Boie 8f0bb6afe6 tracing: simplify idle thread detection
We now define z_is_idle_thread_object() in ksched.h,
and the repeated definitions of a function that does
the same thing have been changed to use the common
definition.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie 0095ed5384 kernel: rename z_is_idle_thread()
This takes an entry point and not a thread as argument.
Rename to z_is_idle_thread_entry() to make this clearer.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie 2c1fb971e0 kernel: rename __swap
This is part of the core kernel -> architecture API and
has been renamed to z_arch_swap().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie fe031611fd kernel: rename main/idle thread/stacks
The main and idle threads, and their associated stacks,
were being referenced in various parts of the kernel
with no central definition. Expose these in kernel_internal.h
and namespace with z_ appropriately.

The main and idle threads were being defined statically,
with another variable exposed to contain their pointer
value. This wastes a bit of memory and isn't accessible
to user threads anyway; just expose the actual thread
objects.

Redundant MAIN_STACK_SIZE and IDLE_STACK_SIZE defines
in init.c removed; just use the Kconfigs they derive
from.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie e6654103ba kernel: rename boot time globals
These are renamed to z_timestamp_main and z_timestamp_idle,
and now specified in kernel_internal.h.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie 4ad9f687df kernel: rename thread return value functions
z_set_thread_return_value is part of the core kernel -> arch
interface and has been renamed to z_arch_thread_return_value_set.

z_set_thread_return_value_with_data renamed to
z_thread_return_value_set_with_data for consistency.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie e1ec59f9c2 kernel: renamespace z_is_in_isr()
This is part of the core kernel -> architecture interface
and is appropriately renamed z_arch_is_in_isr().

References from test cases changed to k_is_in_isr().

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie 61901ccb4c kernel: rename z_new_thread()
This is part of the core kernel -> architecture interface
and should have a leading prefix z_arch_.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Andrew Boie 9e1dda8804 timing_info: rename globals
Global variables related to timing information have been
renamed to be prefixed with z_arch, with naming arranged
in increasing order of specificity.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-09-30 15:25:55 -04:00
Anas Nashif 4abbd54cd5 tracing: remove useless ifdefing for CONFIG_TRACING
Tracing functions are no-ops if CONFIG_TRACING is disabled.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-09-30 10:49:37 -04:00
Peter A. Bigot 5639ea07f8 kernel: timeout: remove unused callback parameter from init function
The callback function has been ignored in z_timeout_init() since the
timer rework in fall 2018.  Passing real handlers to it in code is
distracting when they will be overridden by whatever callback is
provided in z_add_timeout().

As this function is an internal API, deprecation is not necessary.
Remove the parameter and change all call sites to drop the argument.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-09-28 15:41:18 -04:00
Andy Ross cb3964f04f kernel/sched: Reset time slice on swap in SMP
In uniprocessor mode, the kernel knows when a context switch "is
coming" because of the cache optimization and can use that to do
things like update time slice state.  But on SMP the scheduler state
may be updated on the other CPU at any time, so we don't know that a
switch is going to happen until the last minute.

Expose reset_time_slice() as a public function and call it when needed
out of z_swap().

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-26 16:54:06 -04:00
Andy Ross 6564974bae userspace: Support for split 64 bit arguments
System call arguments, at the arch layer, are single words.  So
passing wider values requires splitting them into two registers at
call time.  This gets even more complicated for values (e.g
k_timeout_t) that may have different sizes depending on configuration.
This patch adds a feature to gen_syscalls.py to detect functions with
wide arguments and automatically generates code to split/unsplit them.

Unfortunately the current scheme of Z_SYSCALL_DECLARE_* macros won't
work with functions like this, because for N arguments (our current
maximum N is 10) there are 2^N possible configurations of argument
widths.  So this generates the complete functions for each handler and
wrapper, effectively doing in python what was originally done in the
preprocessor.

Another complexity is that traditionally the z_hdlr_*() function for a
system call has taken the raw list of word arguments, which does not
work when some of those arguments must be 64 bit types.  So instead of
using a single Z_SYSCALL_HANDLER macro, this splits the job of
z_hdlr_*() into two steps: An automatically-generated unmarshalling
function, z_mrsh_*(), which then calls a user-supplied verification
function z_vrfy_*().  The verification function is typesafe, and is a
simple C function with exactly the same argument and return signature
as the syscall impl function.  It is also not responsible for
validating the pointers to the extra parameter array or a wide return
value; that code gets automatically generated.

This commit includes new vrfy/mrsh handling for all syscalls invoked
during CI runs.  Future commits will port the less testable code.
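
The generated split/unsplit is conceptually just this (sketch; the
helper names are ours, not gen_syscalls.py output):

    #include <stdint.h>

    /* caller side: a 64-bit argument occupies two word registers */
    static inline void split64(uint64_t v, uint32_t *lo, uint32_t *hi)
    {
        *lo = (uint32_t)v;
        *hi = (uint32_t)(v >> 32);
    }

    /* z_mrsh_*() side: reassemble before calling z_vrfy_*() */
    static inline uint64_t unsplit64(uint32_t lo, uint32_t hi)
    {
        return ((uint64_t)hi << 32) | (uint64_t)lo;
    }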

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-09-12 11:31:50 +08:00
Andy Ross 6f13980fc7 kernel/mutex: Fix locking to be SMP-safe
The mutex locking was written to use k_sched_lock(), which doesn't
work as a synchronization primitive if there is another CPU running
(it prevents the current CPU from preempting the thread; it says
nothing about what the others are doing).

Use the pre-existing spinlock for all synchronization.  One wrinkle is
that the priority code needed to call z_thread_priority_set(),
which is a rescheduling call that cannot be called with a lock held.
So that got split out into a low-level utility that updates the
scheduler state but allows the caller to defer yielding until later.
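
The resulting pattern, roughly (sketch; the lock name is
illustrative):

    static struct k_spinlock lock;  /* the pre-existing mutex lock */

    static void mutex_update_sketch(struct k_mutex *mutex)
    {
        k_spinlock_key_t key = k_spin_lock(&lock);

        /* updates here are safe against other CPUs, not merely
         * against preemption on this one */
        mutex->lock_count++;

        k_spin_unlock(&lock, key);
    }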

Fixes #17584

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-08-22 17:58:16 -04:00
Andrew Boie 8915e41b7b userspace: adjust arch memory domain interface
The current API was assuming too much, in that it expected that
arch-specific memory domain configuration is only maintained
in some global area, and updates to domains that are not currently
active have no effect.

This was true when all memory domain state was tracked in page
tables or MPU registers, but no longer works when arch-specific
memory management information is stored in thread-specific areas.

This is needed for: #13441 #13074 #15135

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-08-05 13:25:50 +02:00
Andrew Boie 71ce8ceb18 kernel: consolidate error handling code
* z_NanoFatalErrorHandler() is now moved to common kernel code
  and renamed z_fatal_error(). Arches dump arch-specific info
  before calling.
* z_SysFatalErrorHandler() is now moved to common kernel code
  and renamed k_sys_fatal_error_handler(). It is now much simpler;
  the default policy is simply to lock interrupts and halt the system.
  If an implementation of this function returns, then the currently
  running thread is aborted.
* New arch-specific APIs introduced:
  - z_arch_system_halt() simply powers off or halts the system.
* We now have a standard set of fatal exception reason codes,
  namespaced under K_ERR_*
* CONFIG_SIMPLE_FATAL_ERROR_HANDLER deleted
* LOG_PANIC() calls moved to k_sys_fatal_error_handler()
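
A minimal sketch of overriding the new hook (the esf type name and
the z_arch_system_halt() argument are assumptions):

    void k_sys_fatal_error_handler(unsigned int reason,
                                   const z_arch_esf_t *esf)
    {
        ARG_UNUSED(esf);

        if (reason != K_ERR_KERNEL_PANIC) {
            return; /* returning aborts only the offending thread */
        }
        z_arch_system_halt(reason);
    }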

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-25 15:06:58 -07:00
Nicholas Lowell f9ae2d8e64 Includes: #ifdef CONFIG_USE_SWITCH instead of #if to avoid undef warning
Hitting -Wundef in kernel_structs.h; switching to match other
instances where #ifdef is used instead of #if.

Signed-off-by: Nicholas Lowell <nlowell@lexmark.com>
2019-07-14 04:58:47 -07:00
Ioannis Glaropoulos 5d423b8078 userspace: minor typo fixes in various places
System call arguments are indexed from 1 to 6, so arg0
is corrected to arg1 on two occasions. In addition, the
ARM function for system calls is now called z_arm_do_syscall,
so we update the inline comment in __svc handler.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-07-02 19:18:48 -04:00
Andrew Boie 38129ce1a6 kernel: fix CONFIG_THREAD_NAME from user mode.
This mechanism had multiple problems:

- Missing parameter documentation strings.
- Multiple calls to k_thread_name_set() from user
  mode would leak memory, since the copied string was never
  freed
- k_thread_name_get() returns memory to user mode
  with no guarantees on whether user mode can actually
  read it; in the case where the string was in thread
  resource pool memory (which happens when k_thread_name_set()
  is called from user mode) it would never be readable.
- There was no test case coverage for these functions
  from user mode.

To properly fix this, thread objects now have a buffer region
reserved specifically for the thread name. Setting the thread
name copies the string into the buffer. Getting the thread name
with k_thread_name_get() still returns a pointer, but the
system call has been removed. A new API k_thread_name_copy()
is introduced to copy the thread name into a destination buffer,
and a system call has been provided for that instead.
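
Usage from user mode then looks like this sketch (a 0 return is
assumed to mean success):

    char name[CONFIG_THREAD_MAX_NAME_LEN];

    if (k_thread_name_copy(k_current_get(), name, sizeof(name)) == 0) {
        printk("current thread: %s\n", name);
    }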

We now have full test case coverage for these APIs in both user
and supervisor mode.

Some of the code has been cleaned up to place system call
handler functions in proximity with their implementations.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-07-01 16:29:45 -07:00
Anas Nashif a2fd7d70ec cleanup: include/: move misc/util.h to sys/util.h
move misc/util.h to sys/util.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 1859244b64 cleanup: include/: move misc/rb.h to sys/rb.h
move misc/rb.h to sys/rb.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00
Anas Nashif 9ab2a56751 cleanup: include/: move misc/printk.h to sys/printk.h
move misc/printk.h to sys/printk.h and
create a shim for backward-compatibility.

No functional changes to the headers.
A warning in the shim can be controlled with CONFIG_COMPAT_INCLUDES.

Related to #16539

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-06-27 22:55:49 -04:00