kernel: New timeslicing implementation

Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed.

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~5 Hz) and at true rescheduling
   points, not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
Andy Ross 2018-09-25 10:56:09 -07:00 committed by Anas Nashif
commit 9098a45c84
9 changed files with 60 additions and 168 deletions

@@ -48,10 +48,7 @@ void *_get_next_switch_handle(void *interrupted);
 struct k_thread *_find_first_thread_to_unpend(_wait_q_t *wait_q,
 					      struct k_thread *from);
 void idle(void *a, void *b, void *c);
-#ifdef CONFIG_TIMESLICING
-void z_reset_timeslice(void);
-#endif
+void z_time_slice(int ticks);
 /* find which one is the next thread to run */
 /* must be called with interrupts locked */
@@ -227,13 +224,7 @@ static inline void _ready_thread(struct k_thread *thread)
 		_add_thread_to_ready_q(thread);
 	}
-#if defined(CONFIG_TICKLESS_KERNEL) && !defined(CONFIG_SMP) && \
-	defined(CONFIG_TIMESLICING)
-	z_reset_timeslice();
-#endif
 	sys_trace_thread_ready(thread);
 }

 static inline void _ready_one_thread(_wait_q_t *wq)