kernel: rename 'dumb' scheduler and simply call it 'simple'

Improve naming of the scheduler and call it what it is: simple. Using
'dumb' for the default scheduler algorithm in Zephyr is a bad idea.

Signed-off-by: Anas Nashif <anas.nashif@intel.com>
Author: Anas Nashif, 2025-03-12 06:09:27 -04:00 (committed by Benjamin Cabé)
Commit: f29ae72d79
21 changed files with 75 additions and 57 deletions


@@ -10,6 +10,6 @@ CONFIG_SYS_CLOCK_HW_CYCLES_PER_SEC=25000000
 CONFIG_TEST_RANDOM_GENERATOR=y
 CONFIG_X86_MMU=n
 CONFIG_DEBUG_INFO=y
-CONFIG_SCHED_DUMB=y
-CONFIG_WAITQ_DUMB=y
+CONFIG_SCHED_SIMPLE=y
+CONFIG_WAITQ_SIMPLE=y
 CONFIG_X86_VERY_EARLY_CONSOLE=n


@@ -61,7 +61,7 @@ The kernel can be built with one of several choices for the ready queue
 implementation, offering different choices between code size, constant factor
 runtime overhead and performance scaling when many threads are added.

-* Simple linked-list ready queue (:kconfig:option:`CONFIG_SCHED_DUMB`)
+* Simple linked-list ready queue (:kconfig:option:`CONFIG_SCHED_SIMPLE`)

   The scheduler ready queue will be implemented as a simple unordered list, with
   very fast constant time performance for single threads and very low code size.
@@ -97,7 +97,7 @@ runtime overhead and performance scaling when many threads are added.
   list of threads.

   Typical applications with small numbers of runnable threads probably want the
-  DUMB scheduler.
+  simple scheduler.

 The wait_q abstraction used in IPC primitives to pend threads for later wakeup
@@ -108,13 +108,13 @@ the same options.
   When selected, the wait_q will be implemented with a balanced tree. Choose
   this if you expect to have many threads waiting on individual primitives.
-  There is a ~2kb code size increase over :kconfig:option:`CONFIG_WAITQ_DUMB` (which may
+  There is a ~2kb code size increase over :kconfig:option:`CONFIG_WAITQ_SIMPLE` (which may
   be shared with :kconfig:option:`CONFIG_SCHED_SCALABLE`) if the red/black tree is not
   used elsewhere in the application, and pend/unpend operations on "small"
   queues will be somewhat slower (though this is not generally a performance
   path).

-* Simple linked-list wait_q (:kconfig:option:`CONFIG_WAITQ_DUMB`)
+* Simple linked-list wait_q (:kconfig:option:`CONFIG_WAITQ_SIMPLE`)

   When selected, the wait_q will be implemented with a doubly-linked list.
   Choose this if you expect to have only a few threads blocked on any single


@@ -118,7 +118,7 @@ traversed in full. The kernel does not keep a per-CPU run queue.
 That means that the performance benefits from the
 :kconfig:option:`CONFIG_SCHED_SCALABLE` and :kconfig:option:`CONFIG_SCHED_MULTIQ`
 scheduler backends cannot be realized. CPU mask processing is
-available only when :kconfig:option:`CONFIG_SCHED_DUMB` is the selected
+available only when :kconfig:option:`CONFIG_SCHED_SIMPLE` is the selected
 backend. This requirement is enforced in the configuration layer.

 SMP Boot Process


@@ -56,6 +56,11 @@ Removed APIs and options
 * Removed the deprecated ``include/zephyr/net/buf.h`` header file.

 Deprecated APIs and options

+* The scheduler Kconfig options CONFIG_SCHED_DUMB and CONFIG_WAITQ_DUMB were
+  renamed and deprecated. Use :kconfig:option:`CONFIG_SCHED_SIMPLE` and
+  :kconfig:option:`CONFIG_WAITQ_SIMPLE` instead.
+
 ===========================

 New APIs and options
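For applications, the migration is mechanical: swap the deprecated symbols for the renamed ones in the project configuration. A minimal, hypothetical prj.conf sketch (the old names still build, via the compatibility shims added in this commit, but select DEPRECATED):

```ini
# Before (now deprecated):
#   CONFIG_SCHED_DUMB=y
#   CONFIG_WAITQ_DUMB=y
# After:
CONFIG_SCHED_SIMPLE=y
CONFIG_WAITQ_SIMPLE=y
```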


@@ -133,7 +133,7 @@ struct _ready_q {
 	struct k_thread *cache;
 #endif

-#if defined(CONFIG_SCHED_DUMB)
+#if defined(CONFIG_SCHED_SIMPLE)
 	sys_dlist_t runq;
 #elif defined(CONFIG_SCHED_SCALABLE)
 	struct _priq_rb runq;


@@ -121,14 +121,14 @@ config SCHED_DEADLINE
 config SCHED_CPU_MASK
 	bool "CPU mask affinity/pinning API"
-	depends on SCHED_DUMB
+	depends on SCHED_SIMPLE
 	help
 	  When true, the application will have access to the
 	  k_thread_cpu_mask_*() APIs which control per-CPU affinity masks in
 	  SMP mode, allowing applications to pin threads to specific CPUs or
 	  disallow threads from running on given CPUs. Note that as currently
 	  implemented, this involves an inherent O(N) scaling in the number of
-	  idle-but-runnable threads, and thus works only with the DUMB
+	  idle-but-runnable threads, and thus works only with the simple
 	  scheduler (as SCALABLE and MULTIQ would see no benefit).

 	  Note that this setting does not technically depend on SMP and is
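Putting that dependency into practice, a hypothetical prj.conf fragment for an application using CPU pinning (only the renamed symbol from this commit is assumed):

```ini
# The CPU mask API requires the simple linked-list backend, since mask
# processing walks the run queue (O(N) in idle-but-runnable threads).
CONFIG_SCHED_SIMPLE=y
CONFIG_SCHED_CPU_MASK=y
```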
@@ -297,16 +297,23 @@ endchoice # DYNAMIC_THREAD_PREFER
 endif # DYNAMIC_THREADS

+config SCHED_DUMB
+	bool "Simple linked-list ready queue"
+	select DEPRECATED
+	help
+	  Deprecated in favour of SCHED_SIMPLE.
+
 choice SCHED_ALGORITHM
 	prompt "Scheduler priority queue algorithm"
-	default SCHED_DUMB
+	default SCHED_SIMPLE if SCHED_DUMB
+	default SCHED_SIMPLE
 	help
 	  The kernel can be built with several choices for the
 	  ready queue implementation, offering different choices between
 	  code size, constant factor runtime overhead and performance
 	  scaling when many threads are added.

-config SCHED_DUMB
+config SCHED_SIMPLE
 	bool "Simple linked-list ready queue"
 	help
 	  When selected, the scheduler ready queue will be implemented
@@ -339,20 +346,27 @@ config SCHED_MULTIQ
 	  as the classic/textbook array of lists, one per priority.
 	  This corresponds to the scheduler algorithm used in Zephyr
 	  versions prior to 1.12. It incurs only a tiny code size
-	  overhead vs. the "dumb" scheduler and runs in O(1) time
+	  overhead vs. the "simple" scheduler and runs in O(1) time
 	  in almost all circumstances with very low constant factor.
 	  But it requires a fairly large RAM budget to store those list
 	  heads, and the limited features make it incompatible with
 	  features like deadline scheduling that need to sort threads
 	  more finely, and SMP affinity which need to traverse the list
 	  of threads. Typical applications with small numbers of runnable
-	  threads probably want the DUMB scheduler.
+	  threads probably want the simple scheduler.

 endchoice # SCHED_ALGORITHM

+config WAITQ_DUMB
+	bool "Simple linked-list wait_q"
+	select DEPRECATED
+	help
+	  Deprecated in favour of WAITQ_SIMPLE.
+
 choice WAITQ_ALGORITHM
 	prompt "Wait queue priority algorithm"
-	default WAITQ_DUMB
+	default WAITQ_SIMPLE if WAITQ_DUMB
+	default WAITQ_SIMPLE
 	help
 	  The wait_q abstraction used in IPC primitives to pend
 	  threads for later wakeup shares the same backend data
@@ -365,13 +379,13 @@ config WAITQ_SCALABLE
 	  When selected, the wait_q will be implemented with a
 	  balanced tree. Choose this if you expect to have many
 	  threads waiting on individual primitives. There is a ~2kb
-	  code size increase over WAITQ_DUMB (which may be shared with
+	  code size increase over WAITQ_SIMPLE (which may be shared with
 	  SCHED_SCALABLE) if the rbtree is not used elsewhere in the
 	  application, and pend/unpend operations on "small" queues
 	  will be somewhat slower (though this is not generally a
 	  performance path).

-config WAITQ_DUMB
+config WAITQ_SIMPLE
 	bool "Simple linked-list wait_q"
 	help
 	  When selected, the wait_q will be implemented with a


@@ -11,15 +11,15 @@
 #include <zephyr/sys/dlist.h>

 /* Dumb Scheduling */
-#if defined(CONFIG_SCHED_DUMB)
-#define _priq_run_init		z_priq_dumb_init
-#define _priq_run_add		z_priq_dumb_add
-#define _priq_run_remove	z_priq_dumb_remove
-#define _priq_run_yield		z_priq_dumb_yield
+#if defined(CONFIG_SCHED_SIMPLE)
+#define _priq_run_init		z_priq_simple_init
+#define _priq_run_add		z_priq_simple_add
+#define _priq_run_remove	z_priq_simple_remove
+#define _priq_run_yield		z_priq_simple_yield
 # if defined(CONFIG_SCHED_CPU_MASK)
-# define _priq_run_best		z_priq_dumb_mask_best
+# define _priq_run_best		z_priq_simple_mask_best
 # else
-# define _priq_run_best		z_priq_dumb_best
+# define _priq_run_best		z_priq_simple_best
 # endif /* CONFIG_SCHED_CPU_MASK */

 /* Scalable Scheduling */
 #elif defined(CONFIG_SCHED_SCALABLE)
@@ -43,10 +43,10 @@
 #define _priq_wait_remove	z_priq_rb_remove
 #define _priq_wait_best		z_priq_rb_best

 /* Dumb Wait Queue */
-#elif defined(CONFIG_WAITQ_DUMB)
-#define _priq_wait_add		z_priq_dumb_add
-#define _priq_wait_remove	z_priq_dumb_remove
-#define _priq_wait_best		z_priq_dumb_best
+#elif defined(CONFIG_WAITQ_SIMPLE)
+#define _priq_wait_add		z_priq_simple_add
+#define _priq_wait_remove	z_priq_simple_remove
+#define _priq_wait_best		z_priq_simple_best
 #endif

 #if defined(CONFIG_64BIT)
#if defined(CONFIG_64BIT) #if defined(CONFIG_64BIT)
@@ -57,7 +57,7 @@
 #define TRAILING_ZEROS u32_count_trailing_zeros
 #endif /* CONFIG_64BIT */

-static ALWAYS_INLINE void z_priq_dumb_init(sys_dlist_t *pq)
+static ALWAYS_INLINE void z_priq_simple_init(sys_dlist_t *pq)
 {
 	sys_dlist_init(pq);
 }
@@ -103,7 +103,7 @@ static ALWAYS_INLINE int32_t z_sched_prio_cmp(struct k_thread *thread_1, struct
 	return 0;
 }

-static ALWAYS_INLINE void z_priq_dumb_add(sys_dlist_t *pq, struct k_thread *thread)
+static ALWAYS_INLINE void z_priq_simple_add(sys_dlist_t *pq, struct k_thread *thread)
 {
 	struct k_thread *t;
@@ -117,14 +117,14 @@ static ALWAYS_INLINE void z_priq_dumb_add(sys_dlist_t *pq, struct k_thread *thre
 	sys_dlist_append(pq, &thread->base.qnode_dlist);
 }

-static ALWAYS_INLINE void z_priq_dumb_remove(sys_dlist_t *pq, struct k_thread *thread)
+static ALWAYS_INLINE void z_priq_simple_remove(sys_dlist_t *pq, struct k_thread *thread)
 {
 	ARG_UNUSED(pq);

 	sys_dlist_remove(&thread->base.qnode_dlist);
 }

-static ALWAYS_INLINE void z_priq_dumb_yield(sys_dlist_t *pq)
+static ALWAYS_INLINE void z_priq_simple_yield(sys_dlist_t *pq)
 {
 #ifndef CONFIG_SMP
 	sys_dnode_t *n;
@@ -155,7 +155,7 @@ static ALWAYS_INLINE void z_priq_dumb_yield(sys_dlist_t *pq)
 #endif
 }

-static ALWAYS_INLINE struct k_thread *z_priq_dumb_best(sys_dlist_t *pq)
+static ALWAYS_INLINE struct k_thread *z_priq_simple_best(sys_dlist_t *pq)
 {
 	struct k_thread *thread = NULL;
 	sys_dnode_t *n = sys_dlist_peek_head(pq);
@@ -167,7 +167,7 @@ static ALWAYS_INLINE struct k_thread *z_priq_dumb_best(sys_dlist_t *pq)
 }

 #ifdef CONFIG_SCHED_CPU_MASK
-static ALWAYS_INLINE struct k_thread *z_priq_dumb_mask_best(sys_dlist_t *pq)
+static ALWAYS_INLINE struct k_thread *z_priq_simple_mask_best(sys_dlist_t *pq)
 {
 	/* With masks enabled we need to be prepared to walk the list
 	 * looking for one we can run
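The idea behind the renamed helpers is easy to model outside the kernel: a doubly-linked list kept sorted by priority, where "best" is just the head. A minimal, self-contained sketch in plain C (the `node`/`thread` types and function names are illustrative stand-ins for `sys_dlist_t`, `struct k_thread`, and the `z_priq_simple_*` helpers, not Zephyr's actual code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for sys_dlist_t / struct k_thread. */
struct node { struct node *next, *prev; };
struct thread { struct node qnode; int prio; }; /* qnode must be first member */

static void list_init(struct node *head) { head->next = head->prev = head; }

static void insert_before(struct node *pos, struct node *n)
{
	n->prev = pos->prev;
	n->next = pos;
	pos->prev->next = n;
	pos->prev = n;
}

/* Mirrors the shape of z_priq_simple_add: walk until a strictly
 * lower-priority (numerically greater) thread is found and insert before
 * it, so equal priorities stay FIFO; otherwise append at the tail. */
static void priq_simple_add(struct node *pq, struct thread *t)
{
	for (struct node *n = pq->next; n != pq; n = n->next) {
		struct thread *cur = (struct thread *)n;

		if (t->prio < cur->prio) {
			insert_before(n, &t->qnode);
			return;
		}
	}
	insert_before(pq, &t->qnode);
}

/* Mirrors z_priq_simple_best: the best thread is simply the head, O(1). */
static struct thread *priq_simple_best(struct node *pq)
{
	return (pq->next == pq) ? NULL : (struct thread *)pq->next;
}
```

Adding is O(N) in the number of queued threads and picking the best is O(1), which is exactly the trade-off the Kconfig help text describes for small thread counts.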


@@ -3,6 +3,5 @@ CONFIG_NUM_COOP_PRIORITIES=16
 CONFIG_NUM_METAIRQ_PRIORITIES=0
 CONFIG_ERRNO=n

-CONFIG_SCHED_DUMB=y
-CONFIG_WAITQ_DUMB=y
+CONFIG_SCHED_SIMPLE=y
+CONFIG_WAITQ_SIMPLE=y


@@ -21,7 +21,7 @@ config SCHED_IPI_SUPPORTED
 	default y

 config SCHED_CPU_MASK
-	default y if SCHED_DUMB
+	default y if SCHED_SIMPLE

 config MP_MAX_NUM_CPUS
 	default 2


@@ -2,7 +2,7 @@ CONFIG_TEST=y
 CONFIG_NUM_PREEMPT_PRIORITIES=8
 CONFIG_NUM_COOP_PRIORITIES=8

-# Switch these between DUMB/SCALABLE (and SCHED_MULTIQ) to measure
+# Switch these between SIMPLE/SCALABLE (and SCHED_MULTIQ) to measure
 # different backends
-CONFIG_SCHED_DUMB=y
-CONFIG_WAITQ_DUMB=y
+CONFIG_SCHED_SIMPLE=y
+CONFIG_WAITQ_SIMPLE=y


@@ -169,7 +169,7 @@ int main(void)
 }

 /* For reference, an unmodified HEAD on qemu_x86 with
- * !USERSPACE and SCHED_DUMB and using -icount
+ * !USERSPACE and SCHED_SIMPLE and using -icount
  * shift=0,sleep=off,align=off, I get results of:
  *
  * unpend 132 ready 257 switch 278 pend 321 tot 988 (avg 900)


@@ -2,7 +2,7 @@ Scheduling Queue Measurements
 #############################

 A Zephyr application developer may choose between three different scheduling
-algorithms: dumb, scalable and multiq. These different algorithms have
+algorithms: simple, scalable and multiq. These different algorithms have
 different performance characteristics that vary as the
 number of ready threads increases. This benchmark can be used to help
 determine which scheduling algorithm may best suit the developer's application.


@@ -239,7 +239,7 @@ int main(void)
 	freq = timing_freq_get_mhz();

 	printk("Time Measurements for %s sched queues\n",
-	       IS_ENABLED(CONFIG_SCHED_DUMB) ? "dumb" :
+	       IS_ENABLED(CONFIG_SCHED_SIMPLE) ? "simple" :
 	       IS_ENABLED(CONFIG_SCHED_SCALABLE) ? "scalable" : "multiq");
 	printk("Timing results: Clock frequency: %u MHz\n", freq);


@@ -21,9 +21,9 @@ common:
     - CONFIG_BENCHMARK_RECORDING=y

 tests:
-  benchmark.sched_queues.dumb:
+  benchmark.sched_queues.simple:
     extra_configs:
-      - CONFIG_SCHED_DUMB=y
+      - CONFIG_SCHED_SIMPLE=y
   benchmark.sched_queues.scalable:
     extra_configs:


@@ -2,7 +2,7 @@ Wait Queue Measurements
 #######################

 A Zehpyr application developer may choose between two different wait queue
-implementations: dumb and scalable. These two queue implementations perform
+implementations: simple and scalable. These two queue implementations perform
 differently under different loads. This benchmark can be used to showcase how
 the performance of these two implementations vary under varying conditions.


@@ -228,7 +228,7 @@ int main(void)
 	freq = timing_freq_get_mhz();

 	printk("Time Measurements for %s wait queues\n",
-	       IS_ENABLED(CONFIG_WAITQ_DUMB) ? "dumb" : "scalable");
+	       IS_ENABLED(CONFIG_WAITQ_SIMPLE) ? "simple" : "scalable");
 	printk("Timing results: Clock frequency: %u MHz\n", freq);

 	z_waitq_init(&wait_q);


@@ -20,9 +20,9 @@ common:
     - CONFIG_BENCHMARK_RECORDING=y

 tests:
-  benchmark.wait_queues.dumb:
+  benchmark.wait_queues.simple:
     extra_configs:
-      - CONFIG_WAITQ_DUMB=y
+      - CONFIG_WAITQ_SIMPLE=y
   benchmark.wait_queues.scalable:
     extra_configs:


@@ -6,7 +6,7 @@ CONFIG_BT=n
 # Deadline is not compatible with MULTIQ, so we have to pick something
 # specific instead of using the board-level default.
-CONFIG_SCHED_DUMB=y
+CONFIG_SCHED_SIMPLE=y
 CONFIG_IRQ_OFFLOAD=y
 CONFIG_IRQ_OFFLOAD_NESTED=n


@@ -1,7 +1,7 @@
 CONFIG_ZTEST=y
 CONFIG_IRQ_OFFLOAD=y
 CONFIG_TEST_USERSPACE=y
-CONFIG_SCHED_DUMB=y
+CONFIG_SCHED_SIMPLE=y
 CONFIG_MAX_THREAD_BYTES=6
 CONFIG_MP_MAX_NUM_CPUS=1
 CONFIG_ZTEST_FATAL_HOOK=y


@@ -29,11 +29,11 @@ tests:
     extra_args: CONF_FILE=prj_multiq.conf
     extra_configs:
       - CONFIG_TIMESLICING=n
-  kernel.scheduler.dumb_timeslicing:
-    extra_args: CONF_FILE=prj_dumb.conf
+  kernel.scheduler.simple_timeslicing:
+    extra_args: CONF_FILE=prj_simple.conf
     extra_configs:
       - CONFIG_TIMESLICING=y
-  kernel.scheduler.dumb_no_timeslicing:
-    extra_args: CONF_FILE=prj_dumb.conf
+  kernel.scheduler.simple_no_timeslicing:
+    extra_args: CONF_FILE=prj_simple.conf
     extra_configs:
       - CONFIG_TIMESLICING=n


@@ -4,4 +4,4 @@ CONFIG_SCHED_DEADLINE=y
 CONFIG_LOG_DEFAULT_LEVEL=1

 # Test whiteboxes the wait_q and expects it to be a dlist
 CONFIG_WAITQ_SCALABLE=n
-CONFIG_WAITQ_DUMB=y
+CONFIG_WAITQ_SIMPLE=y