/*
 * Copyright (c) 2011-2015 Wind River Systems, Inc.
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/kernel.h>
#include <zephyr/tracing/tracing.h>
#include <zephyr/arch/cpu.h>

__pinned_func
void arch_cpu_idle(void)
{
	sys_trace_idle();

	__asm__ volatile (
		"sti\n\t"
		"hlt\n\t");
}

__pinned_func
void arch_cpu_atomic_idle(unsigned int key)
{
	sys_trace_idle();

	__asm__ volatile (
		"sti\n\t"
		/*
		 * The following statement appears in "Intel 64 and IA-32
		 * Architectures Software Developer's Manual", regarding the
		 * 'sti' instruction:
		 *
		 * "After the IF flag is set, the processor begins responding to
		 * external, maskable interrupts after the next instruction is
		 * executed."
		 *
		 * Thus the IA-32 implementation of arch_cpu_atomic_idle() will
		 * atomically re-enable interrupts and enter a low-power mode.
		 */
		"hlt\n\t");

	/* restore interrupt lockout state before returning to caller */
	if ((key & 0x200U) == 0U) {
		__asm__ volatile("cli");
	}
}