Lines Matching +full:dead +full:- +full:time
20 number of physical CPUs available is visible at build time as
26 non-Zephyr code).
54 on top of the pre-existing :c:struct:`atomic_` layer (itself usually
65 re-acquire it or it will deadlock (it is perfectly legal to nest
71 recursive semantics above, spinlocks in single-CPU contexts produce
84 can hold the lock at any time, that it is released on context switch,
85 and that it is re-acquired when necessary to restore the lock state
99 It is often desirable for real time applications to deliberately
109 :c:func:`k_thread_cpu_mask_enable` will re-enable execution. There are also
113 suspended; otherwise ``-EINVAL`` will be returned.
116 involved in doing the per-CPU mask test requires that the list be
117 traversed in full. The kernel does not keep a per-CPU run queue.
146 many architectures the timer is a per-CPU device and needs to be
157 :figclass: align-center
175 that system idle be implemented using a low-power mode with as many
179 handle the newly-runnable load.
191 APIs will evolve over time to encompass more functionality (e.g. cross-CPU
192 calls), and that the scheduler-specific calls here will be implemented in
204 "DEAD" or for it to re-enter the queue (in which case we terminate it
210 be a much longer time!
242 may be more than one valid set--one of which may be optimal.
244 To better illustrate the distinction, consider a 2-CPU system with ready
258 There are three types of costs/penalties associated with the IPI cascades--and
270 In general, Zephyr kernel code is SMP-agnostic and, like application
275 Per-CPU data
281 running concurrently. Likewise a kernel-provided interrupt stack
294 implemented using a CPU-provided register or addressing mode that can
296 make it available to any kernel-mode code.
306 Switch-based context switching
321 with the swap call, and as we don't want per-architecture assembly
323 somewhat lower-level context switch primitives for SMP systems:
333 Similarly, on interrupt exit, switch-based architectures are expected
342 the caller-saved registers on the current thread's stack when interrupted
343 in order to minimize interrupt latency, and preserve the callee-saved