kernel/k_timer: Robustify vs. late interrupts
The k_timer utility was written to assume that the kernel timeout handler would never be delayed by more than a tick, so it can naively reschedule the next interrupt with a simple delay.

Unfortunately real platforms have glitchy hardware and high tick rates, and on intel_adsp we're seeing this promise being broken in some circumstances.

It's probably not a good idea to try to plumb the timer driver interface up into the IPC layer to do this correction, but thankfully the existing absolute timeout API provides the tools we need (though it does require that CONFIG_TIMEOUT_64BIT be enabled).

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
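For context, a minimal sketch of the user-facing API this change protects; the timer name, callback, and 10 ms period below are invented for illustration and are not part of the commit. A periodic k_timer like this expects each expiration to land one period after the previous scheduled expiration, which is exactly the promise a late tick interrupt was breaking.

/* Illustrative only: a typical periodic k_timer user (names and period
 * are hypothetical, not from this commit).
 */
#include <zephyr/kernel.h>	/* <kernel.h> on older Zephyr trees */

static void tick_expiry(struct k_timer *t)
{
	/* Called from the kernel timeout handler once per period. */
	printk("expired at tick %lld\n", (long long)k_uptime_ticks());
}

K_TIMER_DEFINE(tick_timer, tick_expiry, NULL);

void start_periodic(void)
{
	/* First expiration after 10 ms, then every 10 ms.  With this fix,
	 * one late tick interrupt no longer shifts every later expiration.
	 */
	k_timer_start(&tick_timer, K_MSEC(10), K_MSEC(10));
}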
parent 9b7a36099f
commit 7a59cebf12
1 changed file with 18 additions and 1 deletion
kernel/timer.c

@@ -32,8 +32,25 @@ void z_timer_expiration_handler(struct _timeout *t)
 	 */
 	if (!K_TIMEOUT_EQ(timer->period, K_NO_WAIT) &&
 	    !K_TIMEOUT_EQ(timer->period, K_FOREVER)) {
+		k_timeout_t next = timer->period;
+
+#ifdef CONFIG_TIMEOUT_64BIT
+		/* Exploit the fact that uptime during a kernel
+		 * timeout handler reflects the time of the scheduled
+		 * event and not real time to get some inexpensive
+		 * protection against late interrupts.  If we're
+		 * delayed for any reason, we still end up calculating
+		 * the next expiration as a regular stride from where
+		 * we "should" have run.  Requires absolute timeouts.
+		 * (Note offset by one: we're nominally at the
+		 * beginning of a tick, so need to defeat the "round
+		 * down" behavior on timeout addition).
+		 */
+		next = K_TIMEOUT_ABS_TICKS(k_uptime_ticks() + 1
+					   + timer->period.ticks);
+#endif
 		z_add_timeout(&timer->timeout, z_timer_expiration_handler,
-			      timer->period);
+			      next);
 	}
 
 	/* update timer's status */
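To see why the absolute reschedule helps, here is a standalone arithmetic sketch (plain C with invented numbers; not kernel code): if the expiration handler runs a few ticks late, rescheduling by a relative delay carries that lateness into every later expiration, while rescheduling to an absolute tick derived from the scheduled time keeps the stride anchored where the timer should have run.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const int64_t period = 10;     /* timer period in ticks */
	const int64_t scheduled = 100; /* tick the expiration was scheduled for */
	const int64_t late = 3;        /* interrupt delivered 3 ticks late */

	/* Old scheme: next timeout is a plain delay from the (late) present,
	 * so the delay leaks into every subsequent expiration.
	 */
	int64_t next_relative = (scheduled + late) + period;   /* 113 */

	/* New scheme: uptime inside the timeout handler still reports the
	 * scheduled tick, so the absolute target stays a fixed stride from
	 * where the timer "should" have run (the real code also adds one
	 * tick to defeat round-down on timeout addition).
	 */
	int64_t next_absolute = scheduled + period;             /* 110 */

	printf("relative reschedule: %lld, absolute reschedule: %lld\n",
	       (long long)next_relative, (long long)next_absolute);
	return 0;
}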