zephyr/arch/x86/core/cpuhalt.c
Gerard Marull-Paretas 79e6b0e0f6 includes: prefer <zephyr/kernel.h> over <zephyr/zephyr.h>
As of today <zephyr/zephyr.h> is 100% equivalent to <zephyr/kernel.h>.
This patch proposes to include <zephyr/kernel.h> instead of
<zephyr/zephyr.h>, since it is clearer that you are including the
kernel APIs and (probably) nothing else. <zephyr/zephyr.h> sounds like a
catch-all header that may be confusing. Most applications need to
include a number of other headers to compile, e.g. driver headers or
subsystem headers like BT, logging, etc.

The idea of a catch-all header in Zephyr is probably not feasible
anyway. The reason is that Zephyr is not a library like, for example,
`libpython`. Zephyr provides many utilities nowadays: a kernel,
drivers, subsystems, etc., and things will likely grow. A catch-all
header would be massive and difficult to keep up to date. It is also
likely that an application will only build a small subset. Note that
subsystem-level headers may use a catch-all approach to make things
easier, though.

NOTE: This patch is **NOT** removing the header, just removing its usage
in-tree. I'd advocate for its deprecation (add a #warning on it), but I
understand many people will have concerns.

Signed-off-by: Gerard Marull-Paretas <gerard.marull@nordicsemi.no>
2022-09-05 16:31:47 +02:00


/*
 * Copyright (c) 2011-2015 Wind River Systems, Inc.
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/kernel.h>
#include <zephyr/tracing/tracing.h>
#include <zephyr/arch/cpu.h>

__pinned_func
void arch_cpu_idle(void)
{
	sys_trace_idle();
	__asm__ volatile (
	    "sti\n\t"
	    "hlt\n\t");
}

__pinned_func
void arch_cpu_atomic_idle(unsigned int key)
{
	sys_trace_idle();

	__asm__ volatile (
	    "sti\n\t"
	    /*
	     * The following statement appears in "Intel 64 and IA-32
	     * Architectures Software Developer's Manual", regarding the 'sti'
	     * instruction:
	     *
	     * "After the IF flag is set, the processor begins responding to
	     * external, maskable interrupts after the next instruction is
	     * executed."
	     *
	     * Thus the IA-32 implementation of arch_cpu_atomic_idle() will
	     * atomically re-enable interrupts and enter a low-power mode.
	     */
	    "hlt\n\t");

	/* restore interrupt lockout state before returning to caller */
	if ((key & 0x200U) == 0U) {
		__asm__ volatile("cli");
	}
}