Commit graph

1385 commits

Andrew Boie
2cfeba8507 x86: implement interrupt stack trampoline
Upon hard/soft irq or exception entry/exit, handle transitions
off or onto the trampoline stack, which is the only stack that
can be used on the kernel side when the shadow page table
is active. We swap page tables when on this stack.

Adjustments to page tables are now as follows:

- Any adjustments for stack memory access now are always done
  to the user page tables

- Any adjustments for memory domains are now always done to
  the user page tables

- With KPTI, resetting a page now clears the present bit

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Andrew Boie
f093285345 x86: modify MMU APIs for multiple page tables
The current set of APIs and macros assumed that only one set
of page tables would ever be in use.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-14 12:46:36 -05:00
Kumar Gala
1f59e9e825 tests: kernel: gen_isr_table: Exclude platforms the test isn't valid on
The test assumes that the last two IRQ numbers will be free. This isn't
a valid assumption, and now that we detect multiple ISRs registering for
the same IRQ line, we see failures on some platforms because of it.
Exclude those platforms from this test for the time being.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-12 15:35:29 -05:00
Piotr Zięcik
7a49356c77 power: Fix naming of Kconfig options controlling low power states
The SYS_POWER_LOW_POWER_STATE_SUPPORTED and SYS_POWER_LOW_POWER_STATE
options suggest a single low power state, but they actually control
multiple low power states. This commit uses plural names to indicate
that.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-12 07:46:32 -05:00
Andy Ross
d653e6868e tests/kernel/schedule_api: Bump stack size and unify stacks
The new spinlock validation features combined with spinlockification
have increased stack usage a bit in CONFIG_ASSERT builds, but this is
a good feature we want to keep.  This test was bumping into limits, so
increase the size from 512 to 640 bytes.

Unfortunately, this is also a huge test that creates a LOT of those
stacks across different test cases, so that minor bump blows us past
the 64k SRAM limit on a bunch of boards.  So unify all those stacks
that are only ever used in one case at a time so the memory can be
shared.  Now there's one fixed stack, named "tstack", and one array
"tstacks".  Much smaller.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
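
For illustration, the unified-stack pattern described above might look
roughly like this in a Zephyr test (a sketch; the thread count and sizes
are illustrative, not the test's actual code):

	#include <zephyr.h>

	#define STACK_SIZE (640 + CONFIG_TEST_EXTRA_STACKSIZE)
	#define NUM_THREADS 4   /* illustrative */

	/* One stack shared by sub-tests that run a single helper thread... */
	K_THREAD_STACK_DEFINE(tstack, STACK_SIZE);

	/* ...and one shared array for sub-tests that need several threads at
	 * once.  Because only one sub-test runs at a time, the memory is
	 * reused instead of every test case carrying its own stacks.
	 */
	K_THREAD_STACK_ARRAY_DEFINE(tstacks, NUM_THREADS, STACK_SIZE);
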
Andy Ross
1bf9bd04b1 kernel: Add _unlocked() variant to context switch primitives
These functions, for good design reasons, take a locking key to
atomically release along with the context switch.  But there's still a
common pattern in code of doing a switch unconditionally by passing
irq_lock() directly.  On SMP that's a little hurtful, as it spams the
global lock.  Provide an _unlocked() variant of
_Swap/_reschedule/_pend_curr for simplicity and efficiency.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
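
The call-site change this enables is roughly the following (a sketch;
_reschedule() and its _unlocked() variant are internal kernel APIs, so
the exact signatures here are an assumption):

	/* Before: take the global IRQ lock only to hand the key straight to
	 * the scheduler, which releases it again during the switch.
	 */
	_reschedule(irq_lock());

	/* After: the primitive handles the locking itself, avoiding a
	 * needless acquire/release of the global lock on SMP.
	 */
	_reschedule_unlocked();
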
Andy Ross
aa6e21c24c kernel: Split _Swap() API into irqlock and spinlock variants
We want a _Swap() variant that can atomically release/restore a
spinlock state in addition to the legacy irqlock.  The function as it
was is now named "_Swap_irqlock()", while _Swap() now refers to a
spinlock and takes two arguments.  The former will be going away once
existing users (not that many!  Swap() is an internal API, and the
long port away from legacy irqlocking is going to be happening mostly
in drivers) are ported to spinlocks.

Obviously on uniprocessor setups, these produce identical code.  But
SMP requires that the correct API be used to maintain the global lock.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-08 14:49:39 -05:00
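
A rough sketch of the two forms, with signatures inferred from the
description above (these are internal APIs, so treat the details as an
assumption):

	static struct k_spinlock lock;

	/* Legacy form: the irq_lock() key is released atomically with the
	 * context switch.
	 */
	unsigned int key = irq_lock();
	_Swap_irqlock(key);

	/* New form: a spinlock and its key are released (and later restored)
	 * instead, which is what SMP needs to keep the global lock correct.
	 */
	k_spinlock_key_t k = k_spin_lock(&lock);
	_Swap(&lock, k);
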
Piotr Zięcik
d02e3ebd4c power: Eliminate SYS_PM_* power states.
The power management framework used two different abstractions
to describe power states. The SYS_PM_* constants gave coarse
information about what kind of power state (low power or deep
sleep) was used, while the SYS_POWER_STATE_* abstraction
identified the particular power mode.

This commit removes the SYS_PM_* abstraction, as the same
information is already carried in SYS_POWER_STATE_*.

Signed-off-by: Piotr Zięcik <piotr.ziecik@nordicsemi.no>
2019-02-08 09:07:00 -05:00
Andrew Boie
41f6011c36 userspace: remove APPLICATION_MEMORY feature
This was never a long-term solution, more of a gross hack
to get test cases working until we could figure out a good
end-to-end solution for memory domains that generated
appropriate linker sections. Now that we have this with
the app shared memory feature, and have converted all tests
to remove it, delete this feature.

To date all userspace APIs have been tagged as 'experimental'
which sidesteps deprecation policies.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
525065dd8b tests: convert to use app shared memory
CONFIG_APPLICATION_MEMORY was a stopgap feature that is
being removed from the kernel. Convert tests and samples
to use the application shared memory feature instead,
in most cases using the domain set up by ztest.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
84c808918b tests: set CONFIG_MAX_THREAD_BYTES for a few tests
Some tests instantiate a lot of thread objects. These
were not tagged with __kernel, and
CONFIG_APPLICATION_MEMORY was enabled, so the kernel was
not adding them to the kernel object database.

However, with CONFIG_APPLICATION_MEMORY disabled, this
overflowed the default max number of thread objects (16).
Increase the max to 32 for these particular tests.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
62a6f87139 tests: remove app_memory test
CONFIG_APPLICATION_MEMORY is being removed from
Zephyr.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
889b2377ef tests: userspace: remove extra_sections
This is unnecessary.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Andrew Boie
8bfd8457ea tests: mem_protect: fix Kconfig
We want CONFIG_APPLICATION_MEMORY specifically disabled
for this test, but it was being transitively selected by
CONFIG_TEST_USERSPACE which defaults to on for CONFIG_TEST.

Turn it off so that disabling application memory in the
config actually has an effect.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-08 07:04:30 -05:00
Kumar Gala
5ef93aa639 tests: kernel: interrupt: Exclude platforms the test isn't valid on
The test assumes that the last two IRQ numbers will be free. This isn't
a valid assumption, and now that we detect multiple ISRs registering for
the same IRQ line, we see failures on some platforms because of it.
Exclude those platforms from this test for the time being.

Signed-off-by: Kumar Gala <kumar.gala@linaro.org>
2019-02-07 15:58:43 -05:00
Anas Nashif
177df596d3 tests: irq_offload: remove irq_offload test
This is now part of kernel/common.
Forgot to remove it in d0bcf27eab

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-02-06 10:10:42 -05:00
Andrew Boie
2d9bbdf5f3 x86: remove support for non-PAE page tables
PAE tables introduce the NX bit which is very desirable
from a security perspective, back in 1995.

PAE tables are larger, but we are not targeting x86 memory
protection for RAM constrained devices.

Remove the old style 32-bit tables to make the x86 port
easier to maintain.

Renamed some verbosely named data structures, and fixed an
incorrect number of entries for the page directory
pointer table.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-02-05 20:51:21 -08:00
Anas Nashif
d0bcf27eab tests: common: move irq_offload test to common
This is a small test that can be easily integrated into the common set
of tests we have already.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-02-05 20:40:07 -05:00
Anas Nashif
d35f1150f3 tests: common: move boot_delay test to common
This is a small test that can be easily integrated into the common set
of tests we have already.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-02-05 20:40:07 -05:00
Anas Nashif
3f27247617 tests: common: move errno test to common
This is a small test that can be easily integrated into the common set
of tests we have already.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-02-05 20:40:07 -05:00
Ioannis Glaropoulos
2782a00a00 tests: kernel: interrupt: group IRQ line number selection together
This commit moves the definition of the IRQ_LINE(..) macro from
interrupt.h into nested_irq.c, and adds some inline comments
documenting its use.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2019-02-04 20:50:59 -05:00
Andy Ross
54dd1a3cf1 tests/threads/thread_apis: Add test for CPU mask API
Very simple test for thread CPU masks.  While this is a SMP feature,
the implementation doesn't actually depend on SMP so we can test it
right here in the thread_apis test.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 21:37:24 -05:00
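
A hedged sketch of how the CPU mask API can be exercised, assuming
Zephyr's k_thread_cpu_mask_clear()/k_thread_cpu_mask_enable() helpers
(not named in the commit) and the rule that masks may only be changed
on a thread that has not been started yet:

	#include <zephyr.h>

	#define STACKSZ (512 + CONFIG_TEST_EXTRA_STACKSIZE)

	K_THREAD_STACK_DEFINE(child_stack, STACKSZ);
	static struct k_thread child_thread;

	static void child_fn(void *a, void *b, void *c)
	{
		ARG_UNUSED(a);
		ARG_UNUSED(b);
		ARG_UNUSED(c);
		/* Runs only on the CPU(s) left enabled in the mask. */
	}

	void pin_child_to_cpu0(void)
	{
		k_tid_t tid = k_thread_create(&child_thread, child_stack,
					      K_THREAD_STACK_SIZEOF(child_stack),
					      child_fn, NULL, NULL, NULL,
					      K_PRIO_PREEMPT(0), 0, K_FOREVER);

		/* Mask changes are only legal before the thread starts. */
		k_thread_cpu_mask_clear(tid);      /* disallow all CPUs... */
		k_thread_cpu_mask_enable(tid, 0);  /* ...then allow CPU 0 only */

		k_thread_start(tid);
	}
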
Andy Ross
eda4c027da misc/dlist: Swap insertion API for a faster one
The sys_dlist_insert_*() functions had a behavior where a NULL
argument for the insertion position to sys_dlist_insert_after/before()
was interpreted as "the end of the list".  We never used that
convention (except in one spot internal to dlist.h which was not
itself used anywhere), and of course already have an API for appending
and prepending to a list.

In practice this was a performance disaster.  The NULL check is
virtually never provable statically by the compiler, so that test and
branch is present always.  And worse, the check and call to another
function was pushing this beyond the complexity limit for gcc to
inline a function (at -Os optimization anyway), forcing us to use
function calls for what should be a ~8 instruction sequence.  The
upshot is that dlist insertions were 2-3x slower than they needed to
be.

Deprecate these older APIs and introduce a new sys_dlist_insert() call
which can be much better optimized.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-02-01 15:57:21 -05:00
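
A small usage sketch of the replacement call, assuming
sys_dlist_insert(insert_point, node) places the node immediately before
the insertion point (as the deprecated sys_dlist_insert_before() did):

	#include <zephyr.h>
	#include <misc/dlist.h>   /* <sys/dlist.h> in later trees */

	struct item {
		sys_dnode_t node;
		int key;
	};

	/* Insert 'it' into 'list', keeping the list sorted by ascending key. */
	static void insert_sorted(sys_dlist_t *list, struct item *it)
	{
		sys_dnode_t *pos;

		SYS_DLIST_FOR_EACH_NODE(list, pos) {
			struct item *cur = CONTAINER_OF(pos, struct item, node);

			if (it->key < cur->key) {
				sys_dlist_insert(pos, &it->node);
				return;
			}
		}

		/* Larger than everything already present (or empty list). */
		sys_dlist_append(list, &it->node);
	}
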
Anas Nashif
a93651085e boards: remove pulpino board
This board is unmaintained and unsupported. It is not known to work and
has lots of conditional code across the tree that makes code
unmaintainable.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-31 22:47:18 -05:00
Andrew Boie
c253a686bf app_shmem: auto-initialize partitions
There are no longer per-partition initialization functions.
Instead, we iterate over all of them at boot to set up the
derived k_mem_partitions properly.

Some ARC-specific hacks that should never have been applied
have been removed from the userspace test.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-01-30 23:15:51 -05:00
Andrew Boie
85e1fcb02a app_shmem: renamespace and document
The public APIs for application shared memory are now
properly documented and conform to zephyr naming
conventions.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-01-30 15:43:58 -08:00
Andrew Boie
f278f31da1 app_shmem: delete parallel API for domains
The app shared memory macros for declaring domains provide
no value, despite the stated intentions.

Just declare memory domains using the standard APIs for it.

To support this, symbols declared for app shared memory
partitions now are struct k_mem_partition, which can be
passed to the k_mem_domain APIs as normal, instead of the
app_region structs which are of no interest to the end
user.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-01-29 11:11:49 -08:00
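
In practice the flow described above looks roughly like this, assuming
the renamespaced K_APPMEM_PARTITION_DEFINE()/K_APP_DMEM()/K_APP_BMEM()
macros; the partition and domain names are illustrative:

	#include <zephyr.h>
	#include <app_memory/app_memdomain.h>

	/* Declare a shared partition and place some data in it. */
	K_APPMEM_PARTITION_DEFINE(shared_part);
	K_APP_DMEM(shared_part) int shared_counter;
	K_APP_BMEM(shared_part) char shared_buf[64];

	static struct k_mem_domain app_domain;

	void grant_shared_memory(k_tid_t thread)
	{
		struct k_mem_partition *parts[] = { &shared_part };

		/* The partition symbol is a plain struct k_mem_partition,
		 * so the standard memory-domain APIs apply directly.
		 */
		k_mem_domain_init(&app_domain, ARRAY_SIZE(parts), parts);
		k_mem_domain_add_thread(&app_domain, thread);
	}
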
Johan Hedberg
e405fc41f2 atomic: Add atomic_set_bit_to() API
Several places in the code have constructions like this:

	if (bool_variable) {
		atomic_set_bit(flags, FLAG);
	} else {
		atomic_clear_bit(flags, FLAG);
	}

To reduce the amount of code for such situations, introduce a new
atomic_set_bit_to() helper which lets you condense the above five
lines to a single one:

	atomic_set_bit_to(flags, FLAG, bool_variable);

Signed-off-by: Johan Hedberg <johan.hedberg@intel.com>
2019-01-25 17:35:44 -05:00
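
A plausible implementation of the helper, shown here only as a sketch of
the idea rather than the actual patch:

	#include <atomic.h>     /* <sys/atomic.h> in later trees */
	#include <stdbool.h>

	static inline void atomic_set_bit_to(atomic_t *target, int bit, bool val)
	{
		if (val) {
			atomic_set_bit(target, bit);
		} else {
			atomic_clear_bit(target, bit);
		}
	}
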
Peter A. Bigot
d40b8ce1fb sys: dlist: Add sys_dnode_is_linked
The original implementation allows a list to be corrupted by list
operations on the removed node.  Existing code attempts to avoid this by
using external state to determine whether a node is in a list, but this
is fragile and fails when the state that holds the flag value is changed
after the node is removed, e.g. in preparation for re-using the node.

Follow Linux in invalidating the link pointers in a removed node.  Add
API so that detection of participation in a list is available at the
node abstraction.

This solution relies on the following steady-state invariants:
* A node (as opposed to a list) will never be adjacent to itself in a
  list;
* The next and prev pointers of a node are always either both null or
  both non-null.

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-23 20:46:49 +01:00
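
A hedged sketch of the resulting idiom, where membership is queried from
the node itself rather than from an external flag (sys_dnode_init() is
assumed alongside the sys_dnode_is_linked() call named above):

	#include <zephyr.h>
	#include <misc/dlist.h>   /* <sys/dlist.h> in later trees */

	struct request {
		sys_dnode_t node;
		/* payload */
	};

	static void request_init(struct request *req)
	{
		/* Mark the node as "not in any list". */
		sys_dnode_init(&req->node);
	}

	static void request_cancel(struct request *req)
	{
		/* Ask the node itself; no external "queued" flag is needed. */
		if (sys_dnode_is_linked(&req->node)) {
			/* Removal invalidates the link pointers again, so a
			 * second cancel is harmless.
			 */
			sys_dlist_remove(&req->node);
		}
	}
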
Andrzej Głąbek
37fbff6179 drivers: nrf: Adjust clock_control and timer drivers for nRF9160
Minor adjustments are done to the nRF clock_control and rtc_timer
drivers to make them usable on nRF9160 as well.
The arm_irq_vector_table test code is modified only because it uses
the function that has been renamed in the nrf_rtc_timer driver.

Signed-off-by: Andrzej Głąbek <andrzej.glabek@nordicsemi.no>
2019-01-21 10:13:34 +01:00
Andrew Boie
970758408b printk: don't print incorrect 64-bit integers
printk is supposed to be very lean, but it should at least not
print garbage values. Now when a 64-bit integral value is
passed in to be printed, 'ERR' will be reported if it doesn't
fit in 32 bits, instead of truncating it.

The printk documentation was slightly out of date; it has been
updated.

Fixes: #7179

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-01-18 08:23:15 -08:00
Andrew Boie
dfeed647f5 printk: fix printing 64-bit hex values
These were being truncated to 32 bits, and only 8
hex digits were supported.

An extraneous printk() at the beginning of the test
which was not being tested in any way has been removed.

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2019-01-17 14:59:03 -06:00
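
Taken together, the two printk changes above can be exercised roughly
like this (a sketch; it assumes the standard %llx/%llu specifiers are
what the fixed paths handle, and the exact 'ERR' marker is whatever the
decimal path emits for values that don't fit in 32 bits):

	#include <zephyr.h>
	#include <misc/printk.h>   /* <sys/printk.h> in later trees */

	void printk_64bit_demo(void)
	{
		u64_t big = 0x1122334455667788ULL;

		/* Hex output now shows all 16 digits instead of being
		 * truncated to the low 32 bits.
		 */
		printk("hex: %llx\n", big);

		/* Decimal output of a value that does not fit in 32 bits is
		 * flagged rather than silently truncated.
		 */
		printk("dec: %llu\n", big);
	}
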
Peter A. Bigot
bfad9721d2 kernel: remove k_alert API
This API was used in only one place in non-test code.  See whether we
can remove it.

Closes #12232

Signed-off-by: Peter A. Bigot <pab@pabigot.com>
2019-01-16 21:34:07 -05:00
Adithya Baglody
25572f3b85 tests: Don't run coverage for select test cases.
Disabled CONFIG_COVERAGE for benchmarks and other tests.
This is needed because it interferes with the normal behavior of
these test cases.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2019-01-16 06:12:33 -05:00
Adithya Baglody
516bf34df5 tests: Increase the stack size by CONFIG_TEST_EXTRA_STACKSIZE.
These tests need to size their stacks as a function of
CONFIG_TEST_EXTRA_STACKSIZE; otherwise they fail when
CONFIG_COVERAGE is enabled.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-16 06:12:33 -05:00
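
The usual pattern for sizing a test stack this way is roughly the
following (the base size is illustrative):

	#include <zephyr.h>

	/* Base requirement plus whatever extra headroom the platform or a
	 * coverage build requests via CONFIG_TEST_EXTRA_STACKSIZE.
	 */
	#define STACK_SIZE (512 + CONFIG_TEST_EXTRA_STACKSIZE)

	K_THREAD_STACK_DEFINE(test_stack, STACK_SIZE);
	static struct k_thread test_thread;
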
Andy Ross
f033d542ad tests: samples: Disable newlib tests on x86_64
This builds with a host compiler, not one from the SDK, and so no
newlib library is available.  There is work to enable newlib detection
at and above the cmake level.  This patch can be reverted when that
lands.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Andy Ross
0cc362f873 tests/kernel: Simplify timer spinning
There is actually nothing wrong with this test code idiom.  But it's
tickling a qemu emulator bug with the hpet driver and x86_64[1].  The
rapidly spinning calls to k_uptime_get_32() need to disable
interrupts, read timer hardware state and enable them.  Something goes
wrong in qemu with this process and the timer interrupt gets lost.
The counter blows right past the comparator without delivering its
interrupt, and thus the interrupt won't be delivered until the counter
is next reset in idle after exit from the busy loop, which is
obviously too late to interrupt the timeslicing thread.

Just replace the loops with a single call to k_busy_wait().  The
resulting code ends up being much simpler anyway.  An added bonus is
that we can remove the special case handling for native_posix (which
was an entirely unrelated thing, but with a similar symptom).

[1] But oddly not the same emulated hardware running with the same
driver under the same qemu binary when used with a 32 bit kernel.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
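
The shape of the change, sketched with an illustrative duration:

	#define SLICE_MS 200   /* illustrative */

	/* Before: spin comparing timestamps, which hammers the timer driver
	 * (each k_uptime_get_32() locks interrupts and reads the hardware).
	 */
	u32_t start = k_uptime_get_32();
	while (k_uptime_get_32() - start < SLICE_MS) {
		/* busy loop */
	}

	/* After: burn the same wall-clock time with one call. */
	k_busy_wait(SLICE_MS * USEC_PER_MSEC);
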
Andy Ross
870e8188a8 tests/kernel/sched/schedule_api: Honor TEST_EXTRA_STACKSIZE
Stacks created by tests should add this amount so thread-hungry
architectures can tune it.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Andy Ross
31e79a791e tests/kernel/mem_protect/stackprot: Whitelist x86_64
This architecture doesn't support stack canaries.  In fact the gcc
-fstack-protector features don't seem to be working at all.  I'm
guessing it's an x32 ABI mismatch?

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Andy Ross
b69d0da82d arch/x86_64: New architecture added
This patch adds a x86_64 architecture and qemu_x86_64 board to Zephyr.
Only the basic architecture support needed to run 64 bit code is
added; no drivers are added, though a low-level console exists and is
wired to printk().

The support is built on top of a "X86 underkernel" layer, which can be
built in isolation as a unit test on a Linux host.

Limitations:

+ Right now the SDK lacks an x86_64 toolchain.  The build will fall
  back to a host toolchain if it finds no cross compiler defined,
  which is tested to work on gcc 8.2.1 right now.

+ No x87/SSE/AVX usage is allowed.  This is a stronger limitation than
  other architectures where the instructions work from one thread even
  if the context switch code doesn't support it.  We are passing
  -mno-sse to prevent gcc from automatically generating SSE
  instructions for non-floating-point purposes, which has the side
  effect of changing the ABI.  Future work to handle the FPU registers
  will need to be combined with an "application" ABI distinct from the
  kernel one (or just to require USERSPACE).

+ Paging is enabled (it has to be in long mode), but is a 1:1 mapping
  of all memory.  No MMU/USERSPACE support yet.

+ We are building with -mno-red-zone for stack size reasons, but this
  is a valuable optimization.  Enabling it requires automatic stack
  switching, which requires a TSS, which means it has to happen after
  MMU support.

+ The OS runs in 64 bit mode, but for compatibility reasons is
  compiled to the 32 bit "X32" ABI.  So while the full 64 bit
  registers and instruction set are available, C pointers are 32 bits
  long and Zephyr is constrained to run in the bottom 4G of memory.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Andy Ross
0f075753b8 tests/kernel/threads/thread_apis: Fix include hygiene
These files were relying on _thread_essential_set() from
kernel_internal.h, but not including it directly.  New architectures
won't transitively include things the same way.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-11 15:18:52 -05:00
Adithya Baglody
4c1667fbfa tests: Updated all the tests which use k_thread_access_grant.
With the new implementation we no longer need a NULL-terminated list
of kobjects; the list now contains only valid kobject entries.

Signed-off-by: Adithya Baglody <adithya.nagaraj.baglody@intel.com>
2019-01-03 12:35:14 -08:00
Andy Ross
b748d5219a tests/kernel/timer_api: Synchronize racy subtest
The test_timer_periodicity test is racy and subject to initial state
bugs.  The operation of that test is to:

1. Start a timer with a known period
2. Take the current time with k_uptime_get()
3. Wait for the timer to fire with k_timer_status_sync()
4. Check that the current time minus start time is the period

But that's wrong, because a tick expiring between any of the first
three steps is going to skew the math (i.e. the timer will have
started on a different tick than the "start time").

And taking an interrupt lock around the process can't fix the issue,
because in the tickless world we live in, k_uptime_get() is actually a
realtime quantity based on a hardware counter and doesn't rely on
interrupt delivery.

Instead, use another timer object to synchronize the test start to a
driver tick, ensuring that even if the race is unfixable the initial
conditions are always correct.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
2019-01-03 12:29:02 -05:00
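
A hedged sketch of that synchronization idiom (the timer names and the
period are illustrative):

	#include <zephyr.h>

	#define PERIOD_MS 100   /* illustrative */

	static struct k_timer sync_timer;
	static struct k_timer test_timer;

	static s64_t periodicity_setup(void)
	{
		/* Expire a throwaway one-shot timer so everything below runs
		 * right after a tick boundary.
		 */
		k_timer_init(&sync_timer, NULL, NULL);
		k_timer_start(&sync_timer, K_MSEC(1), K_NO_WAIT);
		k_timer_status_sync(&sync_timer);

		/* Start the timer under test and take the reference time,
		 * both within the freshly started tick.
		 */
		k_timer_init(&test_timer, NULL, NULL);
		k_timer_start(&test_timer, K_MSEC(PERIOD_MS), K_MSEC(PERIOD_MS));

		return k_uptime_get();
	}
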
Anas Nashif
5060ca6a30 cmake: increase minimal required version to 3.13.1
Move to latest cmake version with many bug fixes and enhancements.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Signed-off-by: Carles Cufi <carles.cufi@nordicsemi.no>
2019-01-03 11:51:29 -05:00
Anas Nashif
74a74bb6b8 power: rename api sys_soc -> sys_
The sys_soc prefix is redundant; just call the APIs with a plain sys_* prefix.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-28 16:16:28 -05:00
Anas Nashif
9151fbebf2 power: rename APIs and removing leading _
Remove leading underscore from PM APIs. _ was used for internal APIs.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
2018-12-28 16:16:28 -05:00
Spoorthi K
e62e54bdcd tests: interrupt: Change IRQ priorities in test
Keep IRQ0 priority at 1 and IRQ1 priority at 0,
so that the system timer, which is of priority 0 on ARC,
will be interrupted by IRQ1 of the same priority.
On ARM, the system timer is of priority 1, hence
ISR0 is given priority 2 and ISR1 priority 1.
Thus the system timer will always be interrupted by
ISR1 on both architectures.

Fixes: #12147

Signed-off-by: Spoorthi K <spoorthi.k@intel.com>
2018-12-21 21:04:36 +01:00
Spoorthi K
82f73bd5e3 tests: nested_irq: Fix k_busy_wait usage and interrupt priority
The k_busy_wait() call used in the test expects a time in us, but the
test was specifying the wait in ms.

The test also fails on nRF5 platforms: it hardcodes the interrupt
priorities to 0 and 1 and assumes the system timer to be of priority 0,
which is not the case on nRF5, where (per @pizi-nordic) the system
timer is at priority 1. Hence the test interrupts are changed to
priorities 1 and 2.

Signed-off-by: Spoorthi K <spoorthi.k@intel.com>
2018-12-11 13:36:52 -05:00
Andrew Boie
5f793cc25c tests: mem_pool_api: reduce stack usage
Some arches were getting a stack overflow in ISR context.

Fixes: #9777

Signed-off-by: Andrew Boie <andrew.p.boie@intel.com>
2018-12-07 20:09:47 -05:00
Ioannis Glaropoulos
e8e5c388b4 test: kernel: userspace: add include for arm core mpu
In the wake of dfa7a354ff2a31fea8614b3876b051aadc30b242, where the
inclusions for the MPU APIs were cleaned up, we need to directly
include arm_core_mpu_dev.h in the userspace test suite, which
invokes arm_core_mpu_enable/disable() directly. The same is
already done for ARC MPU.

Signed-off-by: Ioannis Glaropoulos <Ioannis.Glaropoulos@nordicsemi.no>
2018-12-05 15:15:07 -05:00