kernel: New timeslicing implementation

Instead of checking every time we hit the low-level context switch
path to see if the new thread has a "partner" with which it needs to
share time, just run the slice timer always and reset it from the
scheduler at the points where it has already decided a switch needs to
happen.  In TICKLESS_KERNEL situations, we pay the cost of extra timer
interrupts at ~10Hz or whatever, which is low (note also that this
kind of regular wakeup architecture is required on SMP anyway so the
scheduler can "notice" threads scheduled by other CPUs).  Advantages:

1. Much simpler logic.  Significantly smaller code.  No variance or
   dependence on tickless modes or timer driver (beyond setting a
   simple timeout).

2. No arch-specific assembly integration with _Swap() needed.

3. Better performance on many workloads, as the accounting now happens
   at most once per timer interrupt (~10 Hz) and at true reschedule
   points, not on every unrelated context switch and interrupt return.

4. It's SMP-safe.  The previous scheme kept the slice ticks as a
   global variable, which was an unnoticed bug.
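To make the mechanism concrete, the sketch below walks through the idea
in isolation: the CPU keeps a countdown that the timer tick decrements
and that the scheduler resets at every point where it has already
decided to switch.  Everything here (reset_time_slice(), slice_tick(),
SLICE_TICKS, the standalone main() harness) is an illustrative
assumption, not the actual kernel symbols touched by this patch.

    #include <stdbool.h>
    #include <stdio.h>

    #define SLICE_TICKS 3   /* illustrative slice length, in timer ticks */

    struct cpu {
        int slice_ticks;    /* ticks remaining in the current slice */
    };

    static struct cpu cpu0;

    /* Scheduler side: called wherever a switch has already been decided,
     * so the incoming thread always starts with a full slice. */
    static void reset_time_slice(struct cpu *c)
    {
        c->slice_ticks = SLICE_TICKS;
    }

    /* Timer side: called once per timer interrupt rather than from the
     * arch-specific _Swap() path.  Returns true when the slice is used
     * up and the current thread should be preempted. */
    static bool slice_tick(struct cpu *c)
    {
        if (--c->slice_ticks <= 0) {
            reset_time_slice(c);
            return true;
        }
        return false;
    }

    int main(void)
    {
        reset_time_slice(&cpu0);
        for (int tick = 1; tick <= 7; tick++) {
            printf("tick %d: %s\n", tick,
                   slice_tick(&cpu0) ? "preempt" : "keep running");
        }
        return 0;
    }

In this shape the slicing policy reduces to a periodic timeout plus a
reset call, which is why no assembly integration with _Swap() is needed
and why the timer driver only has to provide a simple timeout.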

Signed-off-by: Andy Ross <andrew.j.ross@intel.com>
commit 9098a45c84
Andy Ross, 2018-09-25 10:56:09 -07:00; committed by Anas Nashif
9 changed files with 60 additions and 168 deletions

@@ -96,6 +96,11 @@ struct _cpu {
 	/* one assigned idle thread per CPU */
 	struct k_thread *idle_thread;
 
+#ifdef CONFIG_TIMESLICING
+	/* number of ticks remaining in current time slice */
+	int slice_ticks;
+#endif
+
 	u8_t id;
 
 #ifdef CONFIG_SMP
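
The hunk above is where the slice countdown becomes per-CPU state
inside struct _cpu.  To illustrate why that addresses advantage 4, the
fragment below (made-up names, not code from this patch) shows each
CPU's tick path decrementing only its own counter, so two cores can no
longer race on a single global slice variable:

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_CPUS    2   /* illustrative CPU count */
    #define SLICE_TICKS 3   /* illustrative slice length */

    /* One countdown per CPU, mirroring the slice_ticks field added to
     * struct _cpu in the hunk above. */
    static struct { int slice_ticks; } cpus[MAX_CPUS];

    /* Runs in the timer interrupt of the CPU identified by cpu_id and
     * only ever touches that CPU's own counter. */
    static bool slice_tick_on(int cpu_id)
    {
        return --cpus[cpu_id].slice_ticks <= 0;
    }

    int main(void)
    {
        for (int c = 0; c < MAX_CPUS; c++) {
            cpus[c].slice_ticks = SLICE_TICKS;
        }
        /* CPU 1 ticking twice does not consume CPU 0's slice. */
        slice_tick_on(1);
        slice_tick_on(1);
        printf("cpu0: %d ticks left, cpu1: %d ticks left\n",
               cpus[0].slice_ticks, cpus[1].slice_ticks);
        return 0;
    }

Keeping the counter in struct _cpu reuses per-CPU state the kernel
already maintains (idle_thread, id) instead of adding another global.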