non-Zephyr code).
At the lowest level, however, Zephyr code has often used the
:c:func:`irq_lock` / :c:func:`irq_unlock` primitives to implement
fine-grained critical sections using interrupt masking.
The spinlock API doesn't just mask interrupts locally: it
also atomically validates that a shared lock variable has been
modified before returning to the caller, "spinning" on the check as
needed to wait for the other CPU to exit the lock. The default Zephyr
implementation of :c:func:`k_spin_lock` and :c:func:`k_spin_unlock` is built
on top of the pre-existing :c:struct:`atomic_` layer (itself usually
implemented using compiler intrinsics), though facilities exist for
architectures to define their own for performance reasons.
One important difference between IRQ locks and spinlocks is that the
earlier API was naturally recursive: the lock was global, so it was
legal to acquire a nested lock inside of a critical section.
Spinlocks, by contrast, are distinct objects, and an individual lock must not be
used recursively. Code that holds a specific lock must not try to
re-acquire it or it will deadlock (it is perfectly legal to nest
*distinct* spinlocks).
On uniprocessor systems, the data component of the spinlock
(the atomic lock variable) is unnecessary and elided. Except for the
recursive semantics above, spinlocks in single-CPU contexts produce
identical code to the legacy IRQ lock API. In fact the entirety of the
Zephyr core kernel has now been ported to use spinlocks exclusively.
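As an illustration of the "spin on an atomic lock variable" technique
described above, here is a minimal, self-contained model built on C11
atomics. It is a sketch only: Zephyr's real :c:func:`k_spin_lock` also
masks interrupts and elides the atomic on uniprocessor builds, and the
``toy_`` names below are invented for this example.

.. code-block:: c

   #include <assert.h>
   #include <stdatomic.h>
   #include <stdio.h>

   struct toy_spinlock {
       atomic_flag locked;
   };

   static void toy_spin_lock(struct toy_spinlock *l)
   {
       /* Atomically set the flag; spin until the current holder clears it. */
       while (atomic_flag_test_and_set_explicit(&l->locked,
                                                memory_order_acquire)) {
           /* busy-wait: another CPU is inside the critical section */
       }
   }

   static void toy_spin_unlock(struct toy_spinlock *l)
   {
       atomic_flag_clear_explicit(&l->locked, memory_order_release);
   }

   int main(void)
   {
       struct toy_spinlock lock = { .locked = ATOMIC_FLAG_INIT };
       int shared = 0;

       toy_spin_lock(&lock);
       shared++;                /* critical section */
       toy_spin_unlock(&lock);

       assert(shared == 1);
       printf("ok\n");
       return 0;
   }

Note that acquire/release ordering on the flag is what makes the critical
section's memory accesses visible to the next lock holder.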
When emulated on SMP, the legacy :c:func:`irq_lock` behaves like a single
global spinlock: the kernel ensures that only one thread across all CPUs
can hold the lock at any time, that it is released on context switch,
and that it is re-acquired when necessary to restore the lock state
when a thread is switched in.
The overhead involved in this process has measurable performance
impact, however. That, together with the fact that the
IRQ lock is global, means that code expecting to be run in an SMP
context should use the spinlock API wherever possible.
:c:func:`k_thread_cpu_mask_enable` will re-enable execution. There are also
:c:func:`k_thread_cpu_mask_clear` and :c:func:`k_thread_cpu_mask_enable_all`
convenience APIs. These calls cannot be applied to a runnable thread; the
thread must be blocked or
suspended, otherwise ``-EINVAL`` will be returned.
The extra evaluation
involved in doing the per-CPU mask test requires that the list of ready
threads be traversed in full; the kernel does not keep a per-CPU run queue.
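The consequence of the full traversal can be sketched with a toy ready
list: with per-thread CPU masks, the best thread for a given CPU may sit
anywhere in the list, so the scheduler cannot simply take the head. All
names here (``toy_thread``, ``pick_for_cpu``) are hypothetical, not
Zephyr internals.

.. code-block:: c

   #include <assert.h>
   #include <stdio.h>

   struct toy_thread {
       int priority;        /* lower number = higher priority */
       unsigned cpu_mask;   /* bit N set => may run on CPU N */
   };

   static struct toy_thread *pick_for_cpu(struct toy_thread *list,
                                          int n, int cpu)
   {
       struct toy_thread *best = NULL;

       for (int i = 0; i < n; i++) {          /* full traversal */
           if (!(list[i].cpu_mask & (1u << cpu))) {
               continue;                      /* masked off this CPU */
           }
           if (best == NULL || list[i].priority < best->priority) {
               best = &list[i];
           }
       }
       return best;
   }

   int main(void)
   {
       struct toy_thread ready[] = {
           { .priority = 0, .cpu_mask = 0x1 },  /* pinned to CPU 0 */
           { .priority = 1, .cpu_mask = 0x3 },
           { .priority = 2, .cpu_mask = 0x2 },  /* pinned to CPU 1 */
       };

       /* CPU 1 must skip the highest-priority (head) thread. */
       assert(pick_for_cpu(ready, 3, 1)->priority == 1);
       assert(pick_for_cpu(ready, 3, 0)->priority == 0);
       printf("ok\n");
       return 0;
   }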
On many architectures the timer is a per-CPU device and needs to be
driven independently on each CPU.
Many applications require
that system idle be implemented using a low-power mode with as many
interrupts (including the timer) disabled as possible. But if a CPU is
idled in such a state and a
thread becomes runnable, the idle CPU has no way to "wake up" to
handle the newly-runnable load.
It is expected that these
APIs will evolve over time to encompass more functionality (e.g. cross-CPU
calls), and that the scheduler-specific calls here will be implemented in
terms of that more general mechanism.
"DEAD" or for it to re-enter the queue (in which case we terminate it

involve severe lock contention) for new threads. The expectation is

may be more than one valid set--one of which may be optimal.

To better illustrate the distinction, consider a 2-CPU system with ready

kernel to generate cascading IPIs until the kernel has selected a valid set of

There are three types of costs/penalties associated with the IPI cascades--and
In general, Zephyr kernel code is SMP-agnostic and, like application
code, will work correctly regardless of the number of CPUs available.

Per-CPU data
============

running concurrently. Likewise, a kernel-provided interrupt stack
needs to be allocated and assigned for each physical CPU. This per-CPU
state is stored
within the :c:struct:`_kernel` struct, which has a ``cpus[]`` array indexed by ID.
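A rough model of that layout follows, with hypothetical names: Zephyr's
real ``struct _cpu`` carries considerably more state, and the
current-CPU pointer is normally derived from a CPU register rather than
passed as a parameter as done here.

.. code-block:: c

   #include <assert.h>
   #include <stdio.h>

   #define NUM_CPUS 2

   struct toy_cpu {
       int id;
       void *irq_stack;   /* each CPU needs its own interrupt stack */
       void *current;     /* thread now running on this CPU */
   };

   struct toy_kernel {
       struct toy_cpu cpus[NUM_CPUS];   /* indexed by CPU id */
   };

   static struct toy_kernel kernel;

   /* Stand-in for an arch_curr_cpu()-style accessor: the real one reads
    * a CPU-provided register instead of taking the id as a parameter. */
   static struct toy_cpu *curr_cpu(int hw_cpu_id)
   {
       return &kernel.cpus[hw_cpu_id];
   }

   int main(void)
   {
       for (int i = 0; i < NUM_CPUS; i++) {
           kernel.cpus[i].id = i;
       }
       assert(curr_cpu(1)->id == 1);
       assert(curr_cpu(0) != curr_cpu(1)); /* state is not shared */
       printf("ok\n");
       return 0;
   }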
Generally this is
implemented using a CPU-provided register or addressing mode that can
be read quickly, so as to
make it available to any kernel-mode code.

Switch-based context switching
==============================

The traditional Zephyr context switch primitive has been :c:func:`z_swap`.
This function takes no argument naming a thread to
switch to. The expectation has always been that the scheduler has
already made its decision about what should run next.
with the swap call, and as we don't want per-architecture assembly
code to be manipulating the scheduler's data structures directly, Zephyr provides
somewhat lower-level context switch primitives for SMP systems:
Similarly, on interrupt exit, switch-based architectures are expected
to call :c:func:`z_get_next_switch_handle` to fetch the handle of the
next thread to run.
the caller-saved registers on the current thread's stack when interrupted,
in order to minimize interrupt latency, and preserve the callee-saved
registers only if and when a context switch actually occurs.
a thread's ``switch_handle`` field must be set to a non-NULL value only
after its context has fully been saved.
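That ordering constraint can be modeled in a few lines of C11: publish
the handle with release semantics only after the register save
completes, so a CPU that observes a non-NULL handle knows the saved
context is complete. This is an illustrative sketch with invented
names, not Zephyr's arch layer.

.. code-block:: c

   #include <assert.h>
   #include <stdatomic.h>
   #include <stdio.h>

   struct toy_thread {
       int saved_regs[4];               /* pretend register file */
       _Atomic(void *) switch_handle;   /* NULL while save in progress */
   };

   static void save_context(struct toy_thread *t, const int regs[4])
   {
       for (int i = 0; i < 4; i++) {
           t->saved_regs[i] = regs[i];
       }
       /* Publish only after the save is complete (release ordering). */
       atomic_store_explicit(&t->switch_handle, t, memory_order_release);
   }

   int main(void)
   {
       struct toy_thread out = { .switch_handle = NULL };
       int live_regs[4] = { 1, 2, 3, 4 };

       assert(atomic_load(&out.switch_handle) == NULL); /* not switchable yet */
       save_context(&out, live_regs);
       assert(atomic_load(&out.switch_handle) == &out); /* safe to resume */
       assert(out.saved_regs[3] == 4);
       printf("ok\n");
       return 0;
   }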