.. _scheduling_v2:

Scheduling
##########

The kernel's priority-based scheduler allows an application's threads
to share the CPU.

Concepts
********

The scheduler determines which thread is allowed to execute
at any point in time; this thread is known as the **current thread**.

There are various points in time when the scheduler is given an
opportunity to change the identity of the current thread.  These points
are called **reschedule points**. Some potential reschedule points are:

- transition of a thread from running state to a suspended or waiting
  state, for example by :c:func:`k_sem_take` or :c:func:`k_sleep`
- transition of a thread to the :ref:`ready state <thread_states>`, for
  example by :c:func:`k_sem_give` or :c:func:`k_thread_start`
- return to thread context after processing an interrupt
- when a running thread invokes :c:func:`k_yield`

A thread **sleeps** when it voluntarily initiates an operation that
transitions itself to a suspended or waiting state.

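
For example, a thread that pends on a semaphore sleeps at the :c:func:`k_sem_take`
reschedule point and becomes ready again at the :c:func:`k_sem_give` reschedule
point. The sketch below illustrates this; the semaphore name and the two routines
are hypothetical:

.. code-block:: c

    #include <zephyr/kernel.h>

    K_SEM_DEFINE(data_ready, 0, 1);

    /* Entry point of a hypothetical consumer thread. */
    void consumer(void *p1, void *p2, void *p3)
    {
        while (1) {
            /* Reschedule point: the consumer transitions to a waiting
             * state until the semaphore becomes available.
             */
            k_sem_take(&data_ready, K_FOREVER);

            /* ... process the new data ... */
        }
    }

    /* Called by a producer thread when new data is available. */
    void publish_data(void)
    {
        /* Reschedule point: the consumer becomes ready again and, if it
         * has higher priority than the caller, may run immediately.
         */
        k_sem_give(&data_ready);
    }
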
Whenever the scheduler changes the identity of the current thread,
or when execution of the current thread is replaced by an ISR,
the kernel first saves the current thread's CPU register values.
These register values get restored when the thread later resumes execution.


Scheduling Algorithm
====================

The kernel's scheduler selects the highest priority ready thread
to be the current thread. When multiple ready threads of the same priority
exist, the scheduler chooses the one that has been waiting longest.

A thread's relative priority is primarily determined by its static priority.
However, when both earliest-deadline-first scheduling is enabled
(:kconfig:option:`CONFIG_SCHED_DEADLINE`) and a choice of threads have equal
static priority, then the thread with the earlier deadline is considered
to have the higher priority. Thus, when earliest-deadline-first scheduling is
enabled, two threads are only considered to have the same priority when both
their static priorities and deadlines are equal. The routine
:c:func:`k_thread_deadline_set` is used to set a thread's deadline.

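
As an illustration, a thread might announce a deadline for its next work item
before starting it. The following is a minimal sketch, assuming
:kconfig:option:`CONFIG_SCHED_DEADLINE` is enabled and a recent Zephyr release
that provides the :c:func:`k_ms_to_cyc_ceil32` conversion helper; the routine
name and the 2 ms figure are purely illustrative:

.. code-block:: c

    #include <zephyr/kernel.h>

    void start_work_item(void)
    {
        /* The deadline is expressed in hardware cycles relative to the
         * current time; among ready threads of equal static priority,
         * the one with the earliest deadline is scheduled first.
         */
        k_thread_deadline_set(k_current_get(), k_ms_to_cyc_ceil32(2));

        /* ... perform the time-critical work ... */
    }
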
.. note::
    Execution of ISRs takes precedence over thread execution,
    so the execution of the current thread may be replaced by an ISR
    at any time unless interrupts have been masked. This applies to both
    cooperative threads and preemptive threads.


The kernel can be built with one of several choices for the ready queue
implementation, offering different tradeoffs between code size, constant-factor
runtime overhead, and performance scaling as the number of threads grows.

* Simple linked-list ready queue (:kconfig:option:`CONFIG_SCHED_DUMB`)

  The scheduler ready queue will be implemented as a simple unordered list, with
  very fast constant time performance for single threads and very low code size.
  This implementation should be selected on systems with constrained code size
  that will never see more than a small number (3, maybe) of runnable threads in
  the queue at any given time.  On most platforms (that are not otherwise using
  the red/black tree) this results in a savings of ~2k of code size.

* Red/black tree ready queue (:kconfig:option:`CONFIG_SCHED_SCALABLE`)

  The scheduler ready queue will be implemented as a red/black tree.  This has
  somewhat slower insertion and removal overhead, and on most platforms (that
  are not otherwise using the red/black tree somewhere) requires an extra ~2kb
  of code. The resulting behavior will scale cleanly and quickly into the many
  thousands of threads.

  Use this for applications needing many concurrent runnable threads (> 20 or
  so).  Most applications won't need this ready queue implementation.

* Traditional multi-queue ready queue (:kconfig:option:`CONFIG_SCHED_MULTIQ`)

  When selected, the scheduler ready queue will be implemented as the
  classic/textbook array of lists, one per priority.

  This corresponds to the scheduler algorithm used in Zephyr versions prior to
  1.12.

  It incurs only a tiny code size overhead vs. the "dumb" scheduler and runs in
  O(1) time in almost all circumstances with very low constant factor.  But it
  requires a fairly large RAM budget to store those list heads, and its limited
  feature set makes it incompatible with features such as deadline scheduling,
  which needs to sort threads more finely, and SMP CPU affinity, which needs to
  traverse the list of threads.

  Typical applications with small numbers of runnable threads probably want the
  DUMB scheduler.


The wait_q abstraction used in IPC primitives to pend threads for later wakeup
shares the same backend data structure choices as the scheduler, and can use
the same options.

* Scalable wait_q implementation (:kconfig:option:`CONFIG_WAITQ_SCALABLE`)

  When selected, the wait_q will be implemented with a balanced tree.  Choose
  this if you expect to have many threads waiting on individual primitives.
  There is a ~2kb code size increase over :kconfig:option:`CONFIG_WAITQ_DUMB` (which may
  be shared with :kconfig:option:`CONFIG_SCHED_SCALABLE`) if the red/black tree is not
  used elsewhere in the application, and pend/unpend operations on "small"
  queues will be somewhat slower (though this is not generally a performance
  path).

* Simple linked-list wait_q (:kconfig:option:`CONFIG_WAITQ_DUMB`)

  When selected, the wait_q will be implemented with a doubly-linked list.
  Choose this if you expect to have only a few threads blocked on any single
  IPC primitive.

Cooperative Time Slicing
========================

Once a cooperative thread becomes the current thread, it remains
the current thread until it performs an action that makes it unready.
Consequently, if a cooperative thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of higher priority and equal priority.


  .. image:: cooperative.svg
     :align: center

To overcome such problems, a cooperative thread can voluntarily relinquish
the CPU from time to time to permit other threads to execute.
A thread can relinquish the CPU in two ways:

* Calling :c:func:`k_yield` puts the thread at the back of the scheduler's
  prioritized list of ready threads, and then invokes the scheduler.
  All ready threads whose priority is higher or equal to that of the
  yielding thread are then allowed to execute before the yielding thread is
  rescheduled. If no such ready threads exist, the scheduler immediately
  reschedules the yielding thread without context switching. A sketch of this
  usage follows this list.

* Calling :c:func:`k_sleep` makes the thread unready for a specified
  time period. Ready threads of *all* priorities are then allowed to execute;
  however, there is no guarantee that threads whose priority is lower
  than that of the sleeping thread will actually be scheduled before
  the sleeping thread becomes ready once again.

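
The sketch below shows a cooperative thread that yields between chunks of a
lengthy computation so that other ready threads of equal or higher priority are
not starved. The ``process_chunk`` helper, the routine name, and the chunk count
are purely illustrative:

.. code-block:: c

    #include <zephyr/kernel.h>

    #define NUM_CHUNKS 64

    /* Placeholder for one unit of the lengthy computation. */
    static void process_chunk(int chunk)
    {
        ARG_UNUSED(chunk);
        /* ... */
    }

    void long_computation(void)
    {
        for (int chunk = 0; chunk < NUM_CHUNKS; chunk++) {
            process_chunk(chunk);

            /* Let other ready threads of equal or higher priority run
             * before continuing with the next chunk.
             */
            k_yield();
        }
    }
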
Preemptive Time Slicing
=======================

Once a preemptive thread becomes the current thread, it remains
the current thread until a higher priority thread becomes ready,
or until the thread performs an action that makes it unready.
Consequently, if a preemptive thread performs lengthy computations,
it may cause an unacceptable delay in the scheduling of other threads,
including those of equal priority.


  .. image:: preemptive.svg
     :align: center

To overcome such problems, a preemptive thread can perform cooperative
time slicing (as described above), or the scheduler's time slicing capability
can be used to allow other threads of the same priority to execute.

.. image:: timeslicing.svg
   :align: center

The scheduler divides time into a series of **time slices**, where slices
are measured in system clock ticks. The time slice size is configurable,
and it can be changed while the application is running.

At the end of every time slice, the scheduler checks to see if the current
thread is preemptible and, if so, implicitly invokes :c:func:`k_yield`
on behalf of the thread. This gives other ready threads of the same priority
the opportunity to execute before the current thread is scheduled again.
If no threads of equal priority are ready, the current thread remains
the current thread.

Threads with a priority higher than a specified limit are exempt from preemptive
time slicing, and are never preempted by a thread of equal priority.
This allows an application to use preemptive time slicing
only when dealing with lower priority threads that are less time-sensitive.

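
Both the slice size and the priority limit are set at run time with
:c:func:`k_sched_time_slice_set`. The following is a minimal sketch, assuming
:kconfig:option:`CONFIG_TIMESLICING` is enabled; the 10 ms slice and the
priority limit of 0 are illustrative values:

.. code-block:: c

    #include <zephyr/kernel.h>

    void configure_time_slicing(void)
    {
        /* Time slice preemptible threads whose priority is 0 or lower
         * (numerically >= 0) in 10 ms increments; threads with a higher
         * priority than the limit are exempt from time slicing.
         */
        k_sched_time_slice_set(10, 0);
    }
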
.. note::
   The kernel's time slicing algorithm does *not* ensure that a set
   of equal-priority threads receive an equitable amount of CPU time,
   since it does not measure the amount of time a thread actually gets to
   execute. However, the algorithm *does* ensure that a thread never executes
   for longer than a single time slice without being required to yield.

Scheduler Locking
=================

A preemptible thread that does not wish to be preempted while performing
a critical operation can instruct the scheduler to temporarily treat it
as a cooperative thread by calling :c:func:`k_sched_lock`. This prevents
other threads from interfering while the critical operation is being performed.

Once the critical operation is complete, the preemptible thread must call
:c:func:`k_sched_unlock` to restore its normal, preemptible status.

If a thread calls :c:func:`k_sched_lock` and subsequently performs an
action that makes it unready, the scheduler will switch the locking thread out
and allow other threads to execute. When the locking thread again
becomes the current thread, its non-preemptible status is maintained.

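
A minimal sketch of this pattern; ``update_shared_state`` is a hypothetical
routine and the critical operation itself is elided:

.. code-block:: c

    #include <zephyr/kernel.h>

    void update_shared_state(void)
    {
        /* Become temporarily non-preemptible so that no other thread can
         * run and observe the shared data in a half-updated state.
         * Note that interrupts are not masked; ISRs can still execute.
         */
        k_sched_lock();

        /* ... perform the critical operation ... */

        /* Restore normal preemptible behavior. */
        k_sched_unlock();
    }
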
.. note::
    Locking out the scheduler is a more efficient way for a preemptible thread
    to prevent preemption than changing its priority level to a negative value.


.. _thread_sleeping:

Thread Sleeping
===============

A thread can call :c:func:`k_sleep` to delay its processing
for a specified time period. During the time the thread is sleeping,
the CPU is relinquished to allow other ready threads to execute.
Once the specified delay has elapsed, the thread becomes ready
and is eligible to be scheduled once again.

A sleeping thread can be woken up prematurely by another thread using
:c:func:`k_wakeup`. This technique can sometimes be used
to permit the waking thread to signal the sleeping thread
that something has occurred *without* requiring the threads
to define a kernel synchronization object, such as a semaphore.
Waking up a thread that is not sleeping is allowed, but has no effect.

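
A minimal sketch of this pattern; the entry point name, the routine that wakes
it, and the 500 ms period are purely illustrative, and the poller's thread ID is
assumed to come from :c:func:`k_thread_create` or a ``K_THREAD_DEFINE`` handle:

.. code-block:: c

    #include <zephyr/kernel.h>

    /* Entry point of a hypothetical thread that does periodic work. */
    void poller_entry(void *p1, void *p2, void *p3)
    {
        while (1) {
            /* ... do the periodic work ... */

            /* Sleep for 500 ms, or until another thread wakes us early. */
            k_sleep(K_MSEC(500));
        }
    }

    /* Called from another thread to request the periodic work immediately. */
    void request_work_now(k_tid_t poller)
    {
        /* Waking a thread that is not sleeping has no effect. */
        k_wakeup(poller);
    }
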
.. _busy_waiting:

Busy Waiting
============

A thread can call :c:func:`k_busy_wait` to perform a ``busy wait``
that delays its processing for a specified time period
*without* relinquishing the CPU to another ready thread.

A busy wait is typically used instead of thread sleeping
when the required delay is too short to warrant having the scheduler
context switch from the current thread to another thread and then back again.

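
For example, a driver-style routine might spin for a few microseconds while a
device settles. The routine name and the 10 microsecond figure are illustrative:

.. code-block:: c

    #include <zephyr/kernel.h>

    void strobe_device(void)
    {
        /* ... start an operation on a hypothetical peripheral ... */

        /* The device needs roughly 10 microseconds to settle: far too
         * short to justify a context switch, so spin without giving up
         * the CPU.
         */
        k_busy_wait(10);

        /* ... complete the operation ... */
    }
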
Suggested Uses
**************

Use cooperative threads for device drivers and other performance-critical work.

Use cooperative threads to implement mutual exclusion without the need
for a kernel object, such as a mutex.

Use preemptive threads to give priority to time-sensitive processing
over less time-sensitive processing.

.. _cpu_idle:

CPU Idling
##########

Although normally reserved for the idle thread, in certain special
applications, a thread might want to make the CPU idle.

.. contents::
    :local:
    :depth: 2

Concepts
********

Making the CPU idle causes the kernel to pause all operations until an event,
normally an interrupt, wakes up the CPU. In a regular system, the idle thread
is responsible for this. However, in some constrained systems, it is possible
that another thread takes over this duty.

Implementation
**************

Making the CPU idle
===================

Making the CPU idle is simple: call the :c:func:`k_cpu_idle` API. The CPU will
stop executing instructions until an event occurs. Most likely, the function
will be called within a loop. Note that on certain architectures, upon return,
:c:func:`k_cpu_idle` unconditionally unmasks interrupts.

.. code-block:: c

    #include <zephyr/kernel.h>

    static struct k_sem my_sem;

    void my_isr(void *unused)
    {
        k_sem_give(&my_sem);
    }

    int main(void)
    {
        k_sem_init(&my_sem, 0, 1);

        /* wait for semaphore from ISR, then do related work */

        for (;;) {

            /* wait for ISR to trigger work to perform */
            if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {

                /* ... do processing */

            }

            /* put CPU to sleep to save power */
            k_cpu_idle();
        }
    }

Making the CPU idle in an atomic fashion
========================================

It is sometimes necessary to do some work atomically before making the CPU
idle. In such a case, :c:func:`k_cpu_atomic_idle` should be used instead.

In fact, the previous example contains a race condition: the interrupt could
occur between the failed attempt to take the semaphore and the call that makes
the CPU idle. In some systems, this can cause the CPU to idle until *another*
interrupt occurs, which might be *never*, thus hanging the system completely.
To prevent this, :c:func:`k_cpu_atomic_idle` should be used, as in this
example.

.. code-block:: c

    #include <zephyr/kernel.h>

    static struct k_sem my_sem;

    void my_isr(void *unused)
    {
        k_sem_give(&my_sem);
    }

    int main(void)
    {
        k_sem_init(&my_sem, 0, 1);

        for (;;) {

            unsigned int key = irq_lock();

            /*
             * Wait for semaphore from ISR; if acquired, do related work, then
             * go to next loop iteration (the semaphore might have been given
             * again); else, make the CPU idle.
             */

            if (k_sem_take(&my_sem, K_NO_WAIT) == 0) {

                irq_unlock(key);

                /* ... do processing */

            } else {
                /* put CPU to sleep to save power */
                k_cpu_atomic_idle(key);
            }
        }
    }


Suggested Uses
**************

Use :c:func:`k_cpu_atomic_idle` when a thread has to do some real work in
addition to idling the CPU to wait for an event. See the example above.

Use :c:func:`k_cpu_idle` only when a thread is solely responsible for idling
the CPU, i.e. not doing any real work, as in the example below.

.. code-block:: c

    #include <zephyr/kernel.h>

    int main(void)
    {
        /* ... do some system/application initialization */

        /* thread is only used for CPU idling from this point on */
        for (;;) {
            k_cpu_idle();
        }
    }

.. note::
     **Do not use these APIs unless absolutely necessary.** In a normal system,
     the idle thread takes care of power management, including CPU idling.

API Reference
*************

.. doxygengroup:: cpu_idle_apis