Lines Matching full:all

36 * WFI state until all cpus are ready to enter a coupled state, at
37 * which point the coupled state function will be called on all
40 * Once all cpus are ready to enter idle, they are woken by an smp
43 * final pass is needed to guarantee that all cpus will call the
56 * and only read after all the cpus are ready for the coupled idle
68 * Set struct cpuidle_device.coupled_cpus to the mask of all
69 * coupled cpus, usually the same as cpu_possible_mask if all cpus
81 * called on all cpus at approximately the same time. The driver
82 * should ensure that the cpus all abort together if any cpu tries
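
The file-header hits above (lines 36-82) spell out the driver contract: each cpu parks in a safe (WFI) state until every coupled cpu is ready, at which point the coupled state handler runs on all of them at roughly the same time. As a hedged illustration of the setup step named in lines 68-69 (struct cpuidle_device.coupled_cpus and cpuidle_register_device() are real cpuidle API; the function name is made up, and cpu_possible_mask is just the usual choice the comment mentions):

#include <linux/cpuidle.h>
#include <linux/cpumask.h>

/* Illustrative driver init: every possible cpu shares the coupled state. */
static int example_coupled_device_init(struct cpuidle_device *dev)
{
        cpumask_copy(&dev->coupled_cpus, cpu_possible_mask);
        return cpuidle_register_device(dev);
}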
131 * cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
135 * No caller to this function will return from this function until all online
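
Lines 131-135 describe cpuidle_coupled_parallel_barrier: a reusable counting barrier that no caller leaves until every online coupled cpu has reached it. A minimal user-space model of that behaviour (an assumption built only from the description above, not a quote of the kernel source) is:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4

/*
 * Two-phase counting barrier: nobody leaves until all online_count
 * threads have arrived, and the last one out resets the counter so the
 * same barrier can be reused on the next pass.
 */
static void coupled_barrier(atomic_int *a, int online_count)
{
        atomic_fetch_add(a, 1);
        while (atomic_load(a) < online_count)
                sched_yield();

        if (atomic_fetch_add(a, 1) + 1 == online_count * 2) {
                atomic_store(a, 0);
                return;
        }
        while (atomic_load(a) > online_count)
                sched_yield();
}

static atomic_int barrier_val;

static void *worker(void *arg)
{
        (void)arg;
        coupled_barrier(&barrier_val, NCPUS);
        return NULL;
}

int main(void)
{
        pthread_t t[NCPUS];

        for (int i = 0; i < NCPUS; i++)
                pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NCPUS; i++)
                pthread_join(t[i], NULL);
        puts("all threads crossed the barrier");
        return 0;
}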
227 int all; in cpuidle_coupled_set_not_ready() local
230 all = coupled->online_count | (coupled->online_count << WAITING_BITS); in cpuidle_coupled_set_not_ready()
232 -MAX_WAITING_CPUS, all); in cpuidle_coupled_set_not_ready()
241 * Returns true if all of the cpus in a coupled set are out of the ready loop.
250 * cpuidle_coupled_cpus_ready - check if all cpus in a coupled set are ready
253 * Returns true if all cpus coupled to this target state are in the ready loop
262 * cpuidle_coupled_cpus_waiting - check if all cpus in a coupled set are waiting
265 * Returns true if all cpus coupled to this target state are in the wait loop
277 * Returns true if all of the cpus in a coupled set are out of the waiting loop.
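
Lines 227-232 and the predicates at 241-277 all operate on one atomic word, ready_waiting_counts, that packs the waiting-cpu count into the low WAITING_BITS and the ready-cpu count into the bits above them; line 230's "all" value is simply both fields set to online_count. A runnable user-space model of that packing (the WAITING_BITS value and the helper names are assumptions; only the mask/shift scheme comes from the hits) is:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define WAITING_BITS     16
#define MAX_WAITING_CPUS (1 << WAITING_BITS)
#define WAITING_MASK     (MAX_WAITING_CPUS - 1)

struct coupled_model {
        atomic_int ready_waiting_counts;  /* (ready << WAITING_BITS) | waiting */
        int online_count;
};

static bool cpus_waiting(struct coupled_model *c)
{
        return (atomic_load(&c->ready_waiting_counts) & WAITING_MASK) == c->online_count;
}

static bool cpus_ready(struct coupled_model *c)
{
        return (atomic_load(&c->ready_waiting_counts) >> WAITING_BITS) == c->online_count;
}

/*
 * Model of the "all" test from lines 227-232: take back one ready
 * increment unless every cpu is already both waiting and ready.
 */
static bool set_not_ready(struct coupled_model *c)
{
        int all = c->online_count | (c->online_count << WAITING_BITS);
        int cur = atomic_load(&c->ready_waiting_counts);

        do {
                if (cur == all)
                        return false;   /* too late to back out */
        } while (!atomic_compare_exchange_weak(&c->ready_waiting_counts,
                                               &cur, cur - MAX_WAITING_CPUS));
        return true;
}

int main(void)
{
        struct coupled_model c = { .ready_waiting_counts = 0, .online_count = 4 };

        for (int i = 0; i < 4; i++)     /* four cpus enter the wait loop */
                atomic_fetch_add(&c.ready_waiting_counts, 1);
        printf("all waiting: %d\n", cpus_waiting(&c));

        for (int i = 0; i < 4; i++)     /* then promote themselves to ready */
                atomic_fetch_add(&c.ready_waiting_counts, MAX_WAITING_CPUS);
        printf("all ready:   %d\n", cpus_ready(&c));
        printf("back out after all ready: %d\n", set_not_ready(&c));
        return 0;
}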
290 * Returns the deepest idle state that all coupled cpus can enter
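
Line 290 describes picking the deepest state the whole coupled set can use; since no cpu is put into anything deeper than it asked for, that is the shallowest requested state. A tiny model of that selection (the names are assumptions):

/* Deepest state everyone can enter == minimum of the requested states. */
static int deepest_common_state(const int *requested_state, int ncpus)
{
        int state = requested_state[0];

        for (int i = 1; i < ncpus; i++)
                if (requested_state[i] < state)
                        state = requested_state[i];

        return state;
}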
340 * cpuidle_coupled_poke_others - wake up all other cpus that may be waiting
344 * Calls cpuidle_coupled_poke on all other online cpus.
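
Lines 340-344 name the wakeup fan-out: cpuidle_coupled_poke_others pokes every other online cpu in the coupled set so it drops out of the safe waiting state. A sketch of that loop, assuming the file's internal struct cpuidle_coupled and its per-cpu cpuidle_coupled_poke() helper (kernel context only, not compilable on its own):

/* Poke every other online cpu in the set. */
static void example_poke_others(int this_cpu, struct cpuidle_coupled *coupled)
{
        int cpu;

        for_each_cpu(cpu, &coupled->coupled_cpus)
                if (cpu != this_cpu && cpu_online(cpu))
                        cpuidle_coupled_poke(cpu);
}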
461 * all the other cpus to call this function. Once all coupled cpus are idle,
462 * the second stage will start. Each coupled cpu will spin until all cpus have
499 * all the other cpus out of their waiting state so they can in cpuidle_enter_state_coupled()
511 * Wait for all coupled cpus to be idle, using the deepest state in cpuidle_enter_state_coupled()
551 * All coupled cpus are probably idle. There is a small chance that in cpuidle_enter_state_coupled()
553 * and spin until all coupled cpus have incremented the counter. Once a in cpuidle_enter_state_coupled()
555 * spin until either all cpus have incremented the ready counter, or in cpuidle_enter_state_coupled()
570 * Make sure read of all cpus ready is done before reading pending pokes in cpuidle_enter_state_coupled()
576 * cpu saw that all cpus were waiting. The cpu that reentered idle will in cpuidle_enter_state_coupled()
582 * coupled idle state of all cpus and retry. in cpuidle_enter_state_coupled()
586 /* Wait for all cpus to see the pending pokes */ in cpuidle_enter_state_coupled()
591 /* all cpus have acked the coupled state */ in cpuidle_enter_state_coupled()
603 * exiting the idle enter function and decrementing ready_count. All in cpuidle_enter_state_coupled()
606 * all other cpus will loop back into the safe idle state instead of in cpuidle_enter_state_coupled()
616 * Wait until all coupled cpus have exited idle. There is no risk that in cpuidle_enter_state_coupled()
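
The hits from lines 461-616 outline the two-stage entry path in cpuidle_enter_state_coupled(): spin in the safe state until every coupled cpu is waiting, then increment the ready counter and spin until every cpu is ready (or someone backs out), run the coupled state on all cpus together, and finally wait for the whole set to exit before tearing down. A runnable user-space model of just that happy path (an assumption: no pokes, no aborts, no safe-state fallback, and plain counters instead of the packed word) is:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS 4

static atomic_int waiting_count;
static atomic_int ready_count;
static atomic_int coupled_entries;      /* threads that ran the "coupled state" */

static void *cpu_thread(void *arg)
{
        (void)arg;

        /* Stage 1: announce idle, wait for the whole set (kernel: safe state). */
        atomic_fetch_add(&waiting_count, 1);
        while (atomic_load(&waiting_count) < NCPUS)
                sched_yield();

        /* Stage 2: promote to ready, wait until every cpu is ready. */
        atomic_fetch_add(&ready_count, 1);
        while (atomic_load(&ready_count) < NCPUS)
                sched_yield();

        /* Everyone is ready: "enter" the coupled state together. */
        atomic_fetch_add(&coupled_entries, 1);

        /* Exit: wait for the whole set to leave before resetting our state. */
        atomic_fetch_sub(&ready_count, 1);
        while (atomic_load(&ready_count) > 0)
                sched_yield();
        atomic_fetch_sub(&waiting_count, 1);
        return NULL;
}

int main(void)
{
        pthread_t t[NCPUS];

        for (int i = 0; i < NCPUS; i++)
                pthread_create(&t[i], NULL, cpu_thread, NULL);
        for (int i = 0; i < NCPUS; i++)
                pthread_join(t[i], NULL);

        printf("coupled entries: %d (expected %d)\n",
               atomic_load(&coupled_entries), NCPUS);
        return 0;
}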
713 /* Force all cpus out of the waiting loop. */ in cpuidle_coupled_prevent_idle()