Commit graph

224 commits

Author SHA1 Message Date
Carlo Caione
e2333269ae cache: Introduce external cache controller system support
The cache API currently shipped in Zephyr assumes that the cache
controller is always on-core and thus managed at the arch level. This is
not always the case: many SoCs rely on an external cache controller, a
peripheral external to the core (for example the PL310 cache controller
and the L2Cxxx family). In some cases you also want a single driver to
control a whole set of cache controllers.

Rework the cache code introducing support for external cache
controllers.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-05-08 07:00:33 +02:00
Daniel Leung
783b20712e arch: implement brute force find_lsb_set()
On RISC-V 64-bit, GCC complains about undefined reference
to 'ffs' via __builtin_ffs(). So implement a brute force
way to do it. Once the toolchain has __builtin_ffs(),
this can be reverted.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-05-07 13:36:22 -04:00
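As an illustration of the brute-force approach described in the commit above (a hedged sketch, not the code from the commit), find_lsb_set() can scan upward from bit 0 and return a 1-based index, matching the __builtin_ffs() contract:

    #include <stdint.h>

    /* Returns the 1-based position of the least significant set bit,
     * or 0 if the argument is zero (same contract as __builtin_ffs()).
     */
    static inline unsigned int brute_force_find_lsb_set(uint32_t op)
    {
        for (unsigned int shift = 0; shift < 32; shift++) {
            if ((op >> shift) & 1U) {
                return shift + 1;
            }
        }
        return 0;
    }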
Nicolas Pitre
949ef7c660 Kconfig: clean up FPU and FPU_SHARING entries
CONFIG_FPU: The architecture dependency list is redundant.
Having those archs select CPU_HAS_FPU, and depending on that symbol,
is sufficient and cleaner.

CONFIG_FPU_SHARING: The default should always be y to be on the safe
side here, but as a compromise to avoid affecting existing configs,
let's move the default selection local to those configs that care,
again to avoid a growing list of conditionals here. Also adjust the
help text, which applies to more than just Cortex-M.

Signed-off-by: Nicolas Pitre <npitre@baylibre.com>
2021-05-03 11:56:50 +02:00
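For illustration, the dependency shape described in the commit above might look like the following Kconfig fragment (hypothetical and simplified, not the actual tree):

    # Each arch with an FPU selects CPU_HAS_FPU; FPU then only needs to
    # depend on that one symbol instead of listing every architecture.
    config CPU_HAS_FPU
            bool

    config FPU
            bool "Enable floating point unit (FPU)"
            depends on CPU_HAS_FPU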
Daniel Leung
1117169980 kernel: generate placeholders for kobj tables before final build
Due to the use of gperf to generate the hash table for kobjects,
the addresses of these kobjects cannot change during the last
few phases of linking (especially between zephyr_prebuilt.elf
and zephyr.elf). Because of this, the gperf-generated data
needs to be placed at the end of memory to avoid pushing symbols
around in memory. This prevents moving these generated blocks
to earlier sections, for example, the pinned data section needed
for demand paging. So create placeholders for use in
intermediate linking to reserve space for these generated blocks.
Since the size of these blocks is not known in advance, extra
space is reserved, which could result in some waste. However,
this retains the use of the hash table for faster lookup.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-27 13:32:00 -04:00
Morten Priess
a0dd44c5e0 arch: select HAS_DTS for SPARC
With PR#34449, architectures that use DTS must select the HAS_DTS
configuration.

Signed-off-by: Morten Priess <mtpr@oticon.com>
2021-04-26 13:42:10 +02:00
Daniel Leung
09e8db3d68 kernel: enable using timing subsys to collect paging histograms
This adds bits to the paging timing histogram collection routines
so they can use timing functions to collect execution time data.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung
1eba3545c1 x86: timing: allow userspace to convert cycles to ns
The variable tsc_freq is not accessible in user threads,
which prevents user threads from converting cycles to ns.
So make tsc_freq available globally in the default memory
domain so the conversion is possible.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
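As a hedged illustration of the conversion that needs tsc_freq from user mode (the helper below is hypothetical, not the Zephyr timing API):

    #include <stdint.h>

    /* nanoseconds = cycles * 1e9 / frequency-in-Hz */
    static uint64_t cycles_to_ns(uint64_t cycles, uint64_t tsc_freq_hz)
    {
        return (cycles * UINT64_C(1000000000)) / tsc_freq_hz;
    }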
Daniel Leung
8eea5119d7 kernel: mmu: demand paging execution time histogram
This adds the bits to record execution time of eviction selection,
and backing store page-in/page-out in histograms.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung
ae86519819 kernel: mmu: collect more demand paging statistics
This adds more bits to gather statistics on demand paging,
e.g. clean vs dirty pages evicted, # page faults with
IRQ locked/unlocked, etc.

Also extends this to gather per-thread demand paging
statistics.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-06 16:43:55 -04:00
Daniel Leung
64e99dfcf6 xtensa: change CONFIG_ATOMIC_OPERATIONS_ARCH to imply
Xtensa cores are highly configurable, so a given SoC may not have
the instructions needed for the hardware-assisted atomic
operations. So instead of selecting the arch-specific atomic
operations kconfig, use "imply" instead, so that SoC or board
configs can disable it.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-04-02 07:23:33 -04:00
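A hypothetical Kconfig fragment showing the difference (the SoC symbol name below is made up for illustration):

    # "select" would force ATOMIC_OPERATIONS_ARCH on unconditionally;
    # "imply" only defaults it to y, so an SoC or board config can
    # still set it to n when the core lacks the needed instructions.
    config SOC_XTENSA_EXAMPLE
            bool
            imply ATOMIC_OPERATIONS_ARCH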
Carlo Caione
3539c2fbb3 arm/arm64: Make ARM64 a standalone architecture
Split ARM and ARM64 architectures.

Details:

- CONFIG_ARM64 is decoupled from CONFIG_ARM (not a subset anymore)
- Arch and include AArch64 files are in a dedicated directory
  (arch/arm64 and include/arch/arm64)
- AArch64 boards and SoC are moved to soc/arm64 and boards/arm64
- AArch64-specific DTS files are moved to dts/arm64
- The A72 support for the bcm_vk/viper board is moved to the
  boards/bcm_vk/viper directory

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-31 10:34:33 -05:00
Martin Åberg
83f733ce59 SPARC: improve fatal log
The fatal log now contains
- Trap type in human readable representation
- Integer registers visible to the program when trap was taken
- Special register values such as PC and PSR
- Backtrace with PC and SP

If CONFIG_EXTRA_EXCEPTION_INFO is enabled, then all the above is
logged. If not, only the special registers are logged.

The format is inspired by the GRMON debug monitor and TSIM simulator.
A quick guide on how to use the values is in fatal.c.

It now looks like this:

E: tt = 0x02, illegal_instruction
E:
E:       INS        LOCALS     OUTS       GLOBALS
E:   0:  00000000   f3900fc0   40007c50   00000000
E:   1:  00000000   40004bf0   40008d30   40008c00
E:   2:  00000000   40004bf4   40008000   00000003
E:   3:  40009158   00000000   40009000   00000002
E:   4:  40008fa8   40003c00   40008fa8   00000008
E:   5:  40009000   f3400fc0   00000000   00000080
E:   6:  4000a1f8   40000050   4000a190   00000000
E:   7:  40002308   00000000   40001fb8   000000c1
E:
E: psr: f30000c7   wim: 00000008   tbr: 40000020   y: 00000000
E:  pc: 4000a1f4   npc: 4000a1f8
E:
E:       pc         sp
E:  #0   4000a1f4   4000a190
E:  #1   40002308   4000a1f8
E:  #2   40003b24   4000a258

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2021-03-25 17:48:23 +01:00
Carlo Caione
807991e15f AArch64: Do not use CONFIG_GEN_PRIV_STACKS
We are setting CONFIG_GEN_PRIV_STACKS when AArch64 actually uses a
statically allocated privileged stack.

This error was not captured by the tests because we only verify whether
a read/write to a privileged stack fails, but it can fail for many
reasons, including when the pointer to the privileged stack is not
initialized at all, as in this case.

With this patch we deselect CONFIG_GEN_PRIV_STACKS and we fix the
mem_protect/userspace test to correctly probe the privileged stack.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-03-25 07:23:19 -04:00
Daniel Leung
e211d3a999 kernel: remove CONFIG_KERNEL_LINK_IN_VIRT
There actually is no need for a separate kconfig here, as
the kernel VM address and SRAM address can be used to figure
out if the kernel is linked in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-03-16 15:03:44 -04:00
Andy Ross
820c94e5dd arch/xtensa: Inline atomics
The xtensa atomics layer was written with hand-coded assembly that had
to be called as functions.  That's needlessly slow, given that the low
level primitives are a two-instruction sequence.  Ideally the compiler
should see this as an inline to permit it to better optimize around
the needed barriers.

There was also a bug with the atomic_cas function, which had a loop
internally instead of returning the old value synchronously on a
failed swap.  That's benign right now because our existing spin lock
does nothing but retry it in a tight loop anyway, but it's incorrect
per spec and would have caused a contention hang with more elaborate
algorithms (for example a spinlock with backoff semantics).

Remove the old implementation and replace with a much smaller inline C
one based on just two assembly primitives.

This patch also contains a little bit of refactoring: the atomics
scheme has been split out into a separate header for each
implementation, and the ATOMIC_OPERATIONS_CUSTOM kconfig has been
renamed to ATOMIC_OPERATIONS_ARCH to better capture what it means.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2021-03-08 11:14:27 -05:00
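To see why the synchronous-failure semantics matter, here is a hedged sketch (plain C11 atomics, not the Zephyr implementation) of a spinlock with backoff that relies on the CAS primitive reporting each failed attempt so the backoff can run between tries; a CAS that retried internally would bypass that path:

    #include <stdatomic.h>

    static void backoff_lock(atomic_int *lock)
    {
        int expected = 0;
        int delay = 1;

        while (!atomic_compare_exchange_strong(lock, &expected, 1)) {
            expected = 0;                 /* reset for the next attempt */
            for (volatile int i = 0; i < delay; i++) {
                /* crude exponential backoff between attempts */
            }
            if (delay < 1024) {
                delay *= 2;
            }
        }
    }

    static void backoff_unlock(atomic_int *lock)
    {
        atomic_store(lock, 0);
    }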
Daniel Leung
2816c17a09 x86: allow linking in virtual address space
This adds the pieces to allow the kernel to be linked
in virtual address space.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
Daniel Leung
ece9cad858 kernel: add CONFIG_SRAM_OFFSET
This adds a new kconfig CONFIG_SRAM_OFFSET to specify the offset
from beginning of SRAM where the kernel begins. On x86 and
PC compatible platforms, the first 1MB of RAM is reserved and
Zephyr should not link anything there. However, this 1MB still
needs to be mapped by the MMU to access various platform related
information. CONFIG_SRAM_OFFSET serves a similar function to
CONFIG_KERNEL_VM_OFFSET and is needed for proper phys/virt
address translation.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-02-22 14:55:28 -05:00
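A hedged arithmetic sketch of how the two offsets feed the phys/virt translation (the constants and helper below are hypothetical, not Zephyr's macros):

    #include <stdint.h>

    #define SRAM_BASE_HYP        0x00000000UL  /* hypothetical */
    #define SRAM_OFFSET_HYP      0x00100000UL  /* skip reserved first 1MB */
    #define KERNEL_VM_BASE_HYP   0xC0000000UL  /* hypothetical */
    #define KERNEL_VM_OFFSET_HYP 0x00100000UL  /* hypothetical */

    /* The kernel image starts at SRAM_BASE + SRAM_OFFSET physically and
     * at KERNEL_VM_BASE + KERNEL_VM_OFFSET virtually, so translation is
     * just adding/subtracting the constant delta between the two.
     */
    static inline uintptr_t virt_from_phys(uintptr_t pa)
    {
        return pa - (SRAM_BASE_HYP + SRAM_OFFSET_HYP)
                  + (KERNEL_VM_BASE_HYP + KERNEL_VM_OFFSET_HYP);
    }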
Ioannis Glaropoulos
86c1b57103 arm: cortex_m: select by default FP sharing mode when using the FPU
For applications that make use of the FPU on Cortex-M,
we enforce the FP sharing registers mode, because the
compiler, under certain optimization regimes, may use
FP instructions and create FP context in any thread,
so the unshared registers mode is not practically
supported.

In addition to that, we force FPU_SHARING to depend on
MULTITHREADING, as FPU sharing mode does not make sense
outside of normal multi-threaded builds.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2021-02-02 17:58:58 -05:00
Andrew Boie
77861037d9 x86: map all RAM if ACPI
ACPI tables can lurk anywhere. Map all memory so they can be
read.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
14c5d1f1f7 kernel: add CONFIG_ARCH_MAPS_ALL_RAM
Some arches like x86 need all memory mapped so that they can
fetch information placed arbitrarily by firmware, like ACPI
tables.

Ensure that if this is the case, the kernel won't accidentally
clobber it by thinking the relevant virtual memory is unused.
Otherwise this has no effect on page frame management.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
ed22064e27 x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
56a9e7b91e arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
b0b7756756 x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
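For illustration, an application that needs room for extra cloned page tables could set the new option from its configuration (a hypothetical prj.conf fragment):

    # Reserve space to clone page tables for up to 3 additional
    # memory domains beyond the default.
    CONFIG_X86_MAX_ADDITIONAL_MEM_DOMAINS=3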
Andrew Boie
893822fbda arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
73a3e05e40 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Andrew Boie
69355d13a8 arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-23 19:47:23 -05:00
Anas Nashif
6f61663695 Revert "arch: add KERNEL_VM_OFFSET"
This reverts commit fd2434edbd.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
db0732f11d Revert "kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES"
This reverts commit 9d2ebfff58.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
34e9c09330 Revert "arch: remove KERNEL_RAM_SIZE"
This reverts commit 73561be500.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
e980848ba7 Revert "x86: pre-allocate address space"
This reverts commit 64f05d443a.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
0f24e09bcf Revert "arch: add CONFIG_DEMAND_PAGING"
This reverts commit 48cc63b4a3.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Anas Nashif
adff757c72 Revert "x86: implement demand paging APIs"
This reverts commit 7711c9a82d.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2021-01-22 08:39:45 -05:00
Andrew Boie
7711c9a82d x86: implement demand paging APIs
All arch_ APIs and macros are implemented, and the page fault
handling code will call into the core kernel.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
48cc63b4a3 arch: add CONFIG_DEMAND_PAGING
Indicates at the kernel level that demand paging is active.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
64f05d443a x86: pre-allocate address space
We no longer use a page pool to draw memory pages when doing
memory map operations. We now preallocate the entire virtual
address space so no allocations are ever necessary when mapping
memory.

We still need memory to clone page tables, but this is now
expressed by a new Kconfig X86_MAX_ADDITIONAL_MEM_DOMAINS
which has much clearer semantics than specifying the number
of pages in the pool.

The default address space size is now 8MB, but this can be
tuned by the application.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
73561be500 arch: remove KERNEL_RAM_SIZE
We don't map all RAM at boot any more, just the kernel image.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
9d2ebfff58 kernel: add CONFIG_ARCH_HAS_RESERVED_PAGE_FRAMES
We will need this to run on x86 with PC-like hardware.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Andrew Boie
fd2434edbd arch: add KERNEL_VM_OFFSET
This is only needed if the base address of SRAM doesn't
have the same alignment as the base address of the virtual
address space.

Fix the calculations on X86 where this is the case.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2021-01-21 16:47:00 -05:00
Carlo Caione
e77c841023 cache: Expand the APIs for cache flushing
The only two operations currently supported for data caches in the
cache framework are arch_dcache_flush() and arch_dcache_invd().

This is quite restrictive because for some architectures we also want
to control the i-cache, and in general we want finer control over what
can be flushed, invalidated or cleaned. To address these needs this
patch expands the set of operations that can be performed on data and
instruction caches, adding hooks for operations on the whole cache, a
specific level, or a specific address range.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
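A hedged sketch of the expanded hook shape described in the commit above (the function names are illustrative only, not necessarily the exact Zephyr API):

    #include <stddef.h>

    /* Per cache type (data vs. instruction), operations can target the
     * whole cache, a specific level, or an address range.
     */
    void cache_data_flush_all(void);
    void cache_data_invd_range(void *addr, size_t size);
    void cache_data_flush_and_invd_level(int level);
    void cache_instr_invd_all(void);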
Carlo Caione
20f59c8f1e cache: Rename CACHE_FLUSHING to CACHE_MANAGEMENT
The new APIs are not only dealing with cache flushing. Rename the
Kconfig symbol to CACHE_MANAGEMENT to better reflect this change.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Carlo Caione
923b3be890 kconfig: Unify CACHE_* options
The kconfig options to configure the cache flushing framework
currently live in the arch-specific kconfigs of the ARC and X86
(32-bit) architectures, even though they define the same things.

Move the common symbols to one place accessible by all architectures
and create a menu for them.

Leave the default values in the arch-specific locations.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2021-01-19 14:31:02 -05:00
Flavio Ceolin
47e0621bb7 power: Remove not used build option
There is no usage of BOOTLOADER_CONTEXT_RESTORE since quark support
was removed.

Signed-off-by: Flavio Ceolin <flavio.ceolin@intel.com>
2021-01-08 06:49:43 -05:00
Daniel Leung
0aa61793de x86: remove custom switch to main thread function
The original idea of using a custom switch-to-main-thread
function was to make sure the buffer used to save floating point
registers is aligned correctly, or else an exception would be
raised when saving/restoring those registers. Since
the struct for the buffer is defined with an alignment hint
for the toolchain, the alignment will be enforced by the toolchain
as long as the k_thread struct variable is a dedicated,
declared variable. So there is no need for the custom
switch-to-main-thread function anymore.

This also allows the stack usage calculation of
the interrupt stack to function properly, as the end of
the interrupt stack is no longer used for the dummy
thread.

Signed-off-by: Daniel Leung <daniel.leung@intel.com>
2021-01-04 16:59:59 -08:00
Anas Nashif
142c3060e7 power: move kconfigs from arch/ to power/
Move all Kconfigs where they belong and in one place.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Anas Nashif
dd931f93a2 power: standardize PM Kconfigs and cleanup
- Remove SYS_ prefix
- shorten POWER_MANAGEMENT to just PM
- DEVICE_POWER_MANAGEMENT -> PM_DEVICE

and use PM_ as the prefix for all PM related Kconfigs

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2020-12-09 15:18:29 -05:00
Martin Åberg
53a4acb2dc SPARC: add FPU support
This change adds full shared floating point support for the SPARC
architecture.

All SPARC floating point registers are scratch registers with respect
to function call boundaries. That means we only have to save floating
point registers when switching threads in an ISR. The registers are
stored to the corresponding thread stack.

The FPU is disabled when calling an ISR. Any attempt to use the FPU in
an ISR will generate the fp_disabled trap, which causes a Zephyr fatal
error.

- This commit adds no new thread state.
- All FPU context save/restore is synchronous, and lazy FPU context
  switch is not implemented.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-12-04 14:33:43 +02:00
Andrew Boie
5a58ad508c arch: mem protect Kconfig cleanups
Adds a new CONFIG_MPU which is set if an MPU is enabled. This
is a menuconfig with some MPU-specific options moved
under it.

MEMORY_PROTECTION and SRAM_REGION_PERMISSIONS have been merged.
This configuration depends on an MMU or MPU. The protection
test is updated accordingly.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2020-11-18 08:02:08 -05:00
Martin Åberg
feae3249b2 sparc: add support for thread local storage
Adds the necessary bits to initialize TLS in the stack
area and sets up CPU registers during context switch. Register g7 is
used to point to the thread data. Thread data is accessed with negative
offsets from g7.

Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-11-13 14:53:55 -08:00
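As a hedged illustration of what the scheme above means for generated code (not code from the commit):

    /* A __thread variable access compiles to a load/store relative to
     * the %g7 register; per the commit above, %g7 points at the thread
     * data, which is reached via negative offsets from it.
     */
    __thread int per_thread_counter;

    int bump(void)
    {
        return ++per_thread_counter;
    }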
Martin Åberg
07160fa153 arch: Add SPARC processor architecture
SPARC is an open and royalty free processor architecture.

This commit provides SPARC architecture support to Zephyr. It is
compatible with the SPARC V8 specification and the SPARC ABI and is
independent of processor implementation.

Functionality specific to SPARC processor implementations should
go in soc/sparc. One example is the LEON3 SoC, which is part of this
patch set.

The architecture port is fully SPARC ABI compatible, including trap
handlers and interrupt context.

Number of implemented register windows can be configured.

Some SPARC V8 processors borrow the CASA (compare-and-swap) atomic
instructions from SPARC V9. An option has been defined in the
architecture port to forward the corresponding code-generation option
to the compiler.

Stack size related config options have been defined in sparc/Kconfig
to match the SPARC ABI.

Co-authored-by: Nikolaus Huber <nikolaus.huber.melk@gmail.com>
Signed-off-by: Martin Åberg <martin.aberg@gaisler.com>
2020-11-13 14:53:55 -08:00
Carlo Caione
d770880a71 kconfig: userspace: Select THREAD_STACK_INFO
stack_info is needed for userspace and TLS. Select it when enabling
userspace.

Signed-off-by: Carlo Caione <ccaione@baylibre.com>
2020-11-12 15:57:40 -05:00
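A hypothetical, simplified Kconfig fragment of the relationship described above (not the full option definition):

    config USERSPACE
            bool "User mode threads"
            select THREAD_STACK_INFO  # stack_info is needed for userspace and TLS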