arch/xtensa: clean up arch_cpu_idle function

Some workarounds were introduced during Intel cAVS 2.5 platform bring-up. They are not general, so move them to platform code.

Signed-off-by: Rander Wang <rander.wang@intel.com>
Commit 954901296c by Rander Wang, 2023-11-03 13:53:54 +08:00, committed by Carles Cufí
3 changed files with 14 additions and 49 deletions


@@ -50,18 +50,6 @@ config XTENSA_ENABLE_BACKTRACE
 	help
 	  Enable this config option to print backtrace on panic exception
 
-config XTENSA_CPU_IDLE_SPIN
-	bool "Use busy loop for k_cpu_idle"
-	help
-	  Use a spin loop instead of WAITI for the CPU idle state.
-
-config XTENSA_WAITI_BUG
-	bool "Workaround sequence for WAITI bug on LX6"
-	help
-	  SOF traditionally contains this workaround on its ADSP
-	  platforms which prefixes a WAITI entry with 128 NOP
-	  instructions followed by an ISYNC and EXTW.
-
 config XTENSA_SMALL_VECTOR_TABLE_ENTRY
 	bool "Workaround for small vector table entries"
 	help


@@ -6,48 +6,13 @@
 #include <zephyr/toolchain.h>
 #include <zephyr/tracing/tracing.h>
 
-/* xt-clang removes any NOPs more than 8. So we need to set
- * no optimization to avoid those NOPs from being removed.
- *
- * This function is simple enough and full of hand written
- * assembly that optimization is not really meaningful
- * anyway. So we can skip optimization unconditionally.
- * Re-evaluate its use and add #ifdef if this assumption
- * is no longer valid.
- */
-__no_optimization
+#ifndef CONFIG_ARCH_CPU_IDLE_CUSTOM
 void arch_cpu_idle(void)
 {
 	sys_trace_idle();
-
-	/* Just spin forever with interrupts unmasked, for platforms
-	 * where WAITI can't be used or where its behavior is
-	 * complicated (Intel DSPs will power gate on idle entry under
-	 * some circumstances)
-	 */
-	if (IS_ENABLED(CONFIG_XTENSA_CPU_IDLE_SPIN)) {
-		__asm__ volatile("rsil a0, 0");
-		__asm__ volatile("loop_forever: j loop_forever");
-		return;
-	}
-
-	/* Cribbed from SOF: workaround for a bug in some versions of
-	 * the LX6 IP. Preprocessor ugliness avoids the need to
-	 * figure out how to get the compiler to unroll a loop.
-	 */
-	if (IS_ENABLED(CONFIG_XTENSA_WAITI_BUG)) {
-#define NOP4 __asm__ volatile("nop; nop; nop; nop");
-#define NOP32 NOP4 NOP4 NOP4 NOP4 NOP4 NOP4 NOP4 NOP4
-#define NOP128() NOP32 NOP32 NOP32 NOP32
-		NOP128();
-#undef NOP128
-#undef NOP32
-#undef NOP4
-		__asm__ volatile("isync; extw");
-	}
-
 	__asm__ volatile ("waiti 0");
 }
+#endif
 
 void arch_cpu_atomic_idle(unsigned int key)
 {


@@ -113,4 +113,16 @@ config ADSP_IMR_CONTEXT_SAVE
 	  entering D3 state. Later this context can be used to FW restore
 	  when Host power up DSP again.
 
+config XTENSA_CPU_IDLE_SPIN
+	bool "Use busy loop for k_cpu_idle"
+	help
+	  Use a spin loop instead of WAITI for the CPU idle state.
+
+config XTENSA_WAITI_BUG
+	bool "Workaround sequence for WAITI bug on LX6"
+	help
+	  SOF traditionally contains this workaround on its ADSP
+	  platforms which prefixes a WAITI entry with 128 NOP
+	  instructions followed by an ISYNC and EXTW.
+
 endif # SOC_FAMILY_INTEL_ADSP
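With the generic arch_cpu_idle() now wrapped in #ifndef CONFIG_ARCH_CPU_IDLE_CUSTOM, a platform that still needs these workarounds selects that option and ships its own arch_cpu_idle() alongside the two Kconfig symbols moved above. A minimal sketch of such an SoC-level Kconfig fragment follows; the SOC_SERIES_INTEL_CAVS_V25 symbol and the `select` wiring are illustrative assumptions, not lines from this commit:

```kconfig
# Hypothetical SoC fragment: opt out of the common arch_cpu_idle()
# so the platform copy (which carries the XTENSA_CPU_IDLE_SPIN and
# XTENSA_WAITI_BUG workarounds) is compiled instead.
config SOC_SERIES_INTEL_CAVS_V25
	bool "Intel cAVS v2.5 series"
	select ARCH_CPU_IDLE_CUSTOM
```

The effect is that the preprocessor guard in cpu_idle.c drops the generic implementation at build time, so the SoC-provided definition links without a duplicate-symbol conflict.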