Lines Matching full:idle
3 * coupled.c - helper functions to enter the same idle state on multiple cpus
40 * Once all cpus are ready to enter idle, they are woken by an smp
42 * cpus will find work to do, and choose not to enter idle. A
47 * cpu exits idle, the other cpus will decrement their counter and
50 * requested_state stores the deepest coupled idle state each cpu
56 * and only read after all the cpus are ready for the coupled idle
62 * the waiting loop, in the ready loop, or in the coupled idle state.
64 * or in the coupled idle state.
88 * struct cpuidle_coupled - data for set of cpus that share a coupled idle state
94 * @prevent: flag to prevent coupled idle while a cpu is hotplugging
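The struct and counters described above suggest a layout like the following. This is a sketch based only on the comments in this file: the field names, the NR_CPUS-sized request array, and the choice to pack both counts into one atomic_t are assumptions, not a verbatim copy of drivers/cpuidle/coupled.c.

    #include <linux/atomic.h>
    #include <linux/cpumask.h>

    /* Assumed packing: waiting count in the low half, ready count in the
     * high half, so one atomic read observes both consistently. */
    #define WAITING_BITS_SKETCH  16
    #define WAITING_MASK_SKETCH  ((1 << WAITING_BITS_SKETCH) - 1)

    struct cpuidle_coupled_sketch {
        atomic_t  ready_waiting_counts;     /* both counters, packed */
        atomic_t  abort_barrier;            /* rendezvous for aborted entries */
        int       online_count;             /* cpus that must check in */
        int       refcnt;                   /* devices sharing this struct */
        int       prevent;                  /* nonzero while a cpu hotplugs */
        cpumask_t coupled_cpus;             /* members of this coupled set */
        int       requested_state[NR_CPUS]; /* deepest state each cpu asked for */
    };

The later sketches in this listing build on these names.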
143 * Must only be called from within a coupled idle state handler
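The barrier that comment refers to could be a reusable all-cpu spin rendezvous along these lines (a sketch: the two-pass increment lets the same atomic_t be reused, with the last cpu out resetting it; the function name is illustrative):

    /* Every participating cpu increments once and spins until all arrive;
     * a second increment pass detects the last cpu, which resets the
     * counter to zero so the barrier can be reused. */
    static void coupled_parallel_barrier_sketch(atomic_t *a, int n_cpus)
    {
        atomic_inc(a);
        while (atomic_read(a) < n_cpus)      /* wait for all arrivals */
            cpu_relax();

        if (atomic_inc_return(a) == 2 * n_cpus) {
            atomic_set(a, 0);                /* last cpu out resets */
            return;
        }

        while (atomic_read(a) > n_cpus)      /* wait for the reset */
            cpu_relax();
    }

Being callable only from the coupled idle handler is what forbids sleeping here: every cpu is spinning with interrupts off, so the only safe wait is cpu_relax().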
218 * down from the number of online cpus without going through the coupled idle
286 * cpuidle_coupled_get_state - determine the deepest idle state
290 * Returns the deepest idle state that all coupled cpus can enter
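In sketch form, that is just a minimum over the per-cpu requests published while entering the waiting state (reusing the illustrative struct above; INT_MAX comes from <linux/limits.h>):

    /* The whole set can only go as deep as its shallowest request. */
    static int coupled_get_state_sketch(struct cpuidle_coupled_sketch *c)
    {
        int cpu, state = INT_MAX;

        /* read requests only after all cpus have published them */
        smp_rmb();

        for_each_cpu(cpu, &c->coupled_cpus)
            if (cpu_online(cpu) && c->requested_state[cpu] < state)
                state = c->requested_state[cpu];

        return state;
    }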
323 * Ensures that the target cpu exits its waiting idle state (if it is in it)
324 * and will see updates to waiting_count before it re-enters its waiting idle
328 * either has or will soon have a pending IPI that will wake it out of idle,
329 * or it is currently processing the IPI and is not in idle.
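A hedged sketch of that poke path, assuming the kernel's real smp_call_function_single_async() API plus an illustrative pending mask that collapses redundant IPIs (the _sketch names are not from the file):

    #include <linux/smp.h>

    static cpumask_t poke_pending_sketch;   /* cpus with a poke IPI in flight */
    /* each cpu's csd is assumed to be initialized elsewhere, e.g. with
     * INIT_CSD(csd, coupled_poke_handler_sketch, NULL) */
    static DEFINE_PER_CPU(call_single_data_t, poke_csd_sketch);

    /* The IPI handler's only real job is to arrive: arriving is what
     * knocks the target cpu out of its waiting/safe idle state. */
    static void coupled_poke_handler_sketch(void *info)
    {
        cpumask_clear_cpu(smp_processor_id(), &poke_pending_sketch);
    }

    static void coupled_poke_sketch(int cpu)
    {
        call_single_data_t *csd = &per_cpu(poke_csd_sketch, cpu);

        /* send at most one IPI per cpu until the last one is handled */
        if (!cpumask_test_and_set_cpu(cpu, &poke_pending_sketch))
            smp_call_function_single_async(cpu, csd);
    }

The test-and-set doubles as the "has or will soon have a pending IPI" guarantee quoted above: once the bit is set, either the IPI is queued or its handler has not yet cleared the bit.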
362 * Updates the requested idle state for the specified cpuidle device.
382 * Removes the requested idle state for the specified cpuidle device.
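Sketches of those two updates, using the packed counter from the struct sketch (the ordering comment is the load-bearing part):

    /* Publish this cpu's request, then count it as waiting. */
    static void coupled_set_waiting_sketch(int cpu,
            struct cpuidle_coupled_sketch *c, int next_state)
    {
        c->requested_state[cpu] = next_state;

        /* atomic_inc_return() is fully ordered, so any cpu that sees the
         * new waiting count also sees this cpu's requested_state. */
        atomic_inc_return(&c->ready_waiting_counts);
    }

    /* Withdraw the request when this cpu bails out of idle. */
    static void coupled_set_not_waiting_sketch(int cpu,
            struct cpuidle_coupled_sketch *c)
    {
        /* requested_state is only read once all cpus are waiting, so
         * dropping the count first is safe */
        atomic_dec(&c->ready_waiting_counts);
        c->requested_state[cpu] = -1;   /* illustrative "not idle" sentinel */
    }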
405 * this cpu as waiting just before it exits idle.
422 * the interrupt didn't schedule work that should take the cpu out of idle.
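One shape for that re-check (a sketch reusing the poke mask above): open a brief interrupt window so any pending poke IPI runs, then let the caller consult need_resched() to see whether the interrupt queued real work:

    #include <linux/errno.h>

    /* Returns nonzero if a poke interrupted this cpu's wait. */
    static int coupled_clear_pokes_sketch(int cpu)
    {
        if (!cpumask_test_cpu(cpu, &poke_pending_sketch))
            return 0;

        local_irq_enable();
        while (cpumask_test_cpu(cpu, &poke_pending_sketch))
            cpu_relax();            /* the IPI handler clears the bit */
        local_irq_disable();

        return -EINTR;
    }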
461 * all the other cpus to call this function. Once all coupled cpus are idle,
466 * interrupts while preparing for idle, and it will always return with
511 * Wait for all coupled cpus to be idle, using the deepest state in cpuidle_enter_state_coupled()
551 * All coupled cpus are probably idle. There is a small chance that in cpuidle_enter_state_coupled()
554 * cpu has incremented the ready counter, it cannot abort idle and must in cpuidle_enter_state_coupled()
556 * another cpu leaves idle and decrements the waiting counter. in cpuidle_enter_state_coupled()
561 /* Check if any other cpus bailed out of idle. */ in cpuidle_enter_state_coupled()
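Putting the waiting and ready phases together, the handshake those lines describe could be shaped like this condensed sketch (error paths trimmed; the real code makes the ready-count back-out conditional with a cmpxchg so it cannot race with the last cpu becoming ready):

    static bool coupled_cpus_waiting_sketch(struct cpuidle_coupled_sketch *c)
    {
        return (atomic_read(&c->ready_waiting_counts) & WAITING_MASK_SKETCH)
                == c->online_count;
    }

    static bool coupled_cpus_ready_sketch(struct cpuidle_coupled_sketch *c)
    {
        return (atomic_read(&c->ready_waiting_counts) >> WAITING_BITS_SKETCH)
                == c->online_count;
    }

    static int coupled_enter_sketch(int cpu, struct cpuidle_coupled_sketch *c)
    {
    retry:
        /* Phase 1: sit in the safe state until every cpu is waiting. */
        while (!coupled_cpus_waiting_sketch(c)) {
            if (coupled_clear_pokes_sketch(cpu) || need_resched()) {
                coupled_set_not_waiting_sketch(cpu, c);
                return -EBUSY;      /* this cpu has work: abort idle */
            }
            /* enter the shallow/safe state here, then recheck */
        }

        /* Phase 2: once the ready count is bumped, this cpu may no longer
         * abort on its own; it can only back out while another cpu has
         * already left the waiting state. */
        atomic_add(1 << WAITING_BITS_SKETCH, &c->ready_waiting_counts);
        while (!coupled_cpus_ready_sketch(c)) {
            /* check if any other cpus bailed out of idle */
            if (!coupled_cpus_waiting_sketch(c)) {
                atomic_sub(1 << WAITING_BITS_SKETCH,
                           &c->ready_waiting_counts);
                goto retry;
            }
            cpu_relax();
        }

        /* all cpus ready: enter the deep coupled state here */
        return 0;
    }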
575 * There is a small chance that a cpu left and reentered idle after this in cpuidle_enter_state_coupled()
576 * cpu saw that all cpus were waiting. The cpu that reentered idle will in cpuidle_enter_state_coupled()
579 * controller when entering the deep idle state. It's not possible to in cpuidle_enter_state_coupled()
582 * coupled idle state of all cpus and retry. in cpuidle_enter_state_coupled()
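The retry those lines describe might hang off one last check between "all ready" and actually entering the deep state (a speculative sketch reusing the barrier and poke mask above; the real code only considers pokes aimed at cpus in this coupled set):

    /* Last-chance abort: a pending poke could be lost inside the deep
     * state, so if any exists, all cpus rendezvous and retry instead. */
    static bool coupled_abort_if_poked_sketch(struct cpuidle_coupled_sketch *c)
    {
        smp_rmb();      /* read pokes only after seeing all cpus ready */

        if (!cpumask_empty(&poke_pending_sketch)) {
            /* the poked cpu still has irqs off, so its pending bit is
             * still set; every cpu sees the same non-empty mask, takes
             * this branch, and the barrier completes */
            coupled_parallel_barrier_sketch(&c->abort_barrier,
                                            c->online_count);
            return true;    /* caller falls back to the safe state, retries */
        }
        return false;
    }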
602 * that brings it out of idle will process that interrupt before in cpuidle_enter_state_coupled()
603 * exiting the idle enter function and decrementing ready_count. All in cpuidle_enter_state_coupled()
606 * all other cpus will loop back into the safe idle state instead of in cpuidle_enter_state_coupled()
616 * Wait until all coupled cpus have exited idle. There is no risk that in cpuidle_enter_state_coupled()
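In sketch form, the exit path drops both counts for this cpu and then spins until the whole set has left, which is safe for exactly the reason those lines give:

    /* Leave the ready and waiting states, then sync all departures. */
    static void coupled_exit_sketch(int cpu, struct cpuidle_coupled_sketch *c)
    {
        atomic_sub(1 << WAITING_BITS_SKETCH, &c->ready_waiting_counts);
        coupled_set_not_waiting_sketch(cpu, c);

        local_irq_enable();     /* let any pending poke IPI drain */

        /* No cpu can climb back to ready: waiting_count is already short
         * this cpu, so it cannot reach online_count again until the next
         * full entry. */
        while ((atomic_read(&c->ready_waiting_counts)
                >> WAITING_BITS_SKETCH) != 0)
            cpu_relax();
    }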
637 * Called from cpuidle_register_device to handle coupled idle init. Finds the
686 * Called from cpuidle_unregister_device to tear down coupled idle. Removes the
687 * cpu from the coupled idle set, and frees the cpuidle_coupled_info struct if
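Those two paths imply refcounted sharing of the per-set struct. A simplified sketch (the real code locates an existing set by matching coupled cpumasks across registered devices; here an explicit slot pointer stands in for that lookup):

    #include <linux/slab.h>

    /* First cpu of a set allocates; later cpus just take a reference. */
    static int coupled_register_sketch(struct cpuidle_coupled_sketch **slot,
            const cpumask_t *coupled_cpus)
    {
        if (!*slot) {
            *slot = kzalloc(sizeof(**slot), GFP_KERNEL);
            if (!*slot)
                return -ENOMEM;
            cpumask_copy(&(*slot)->coupled_cpus, coupled_cpus);
        }
        (*slot)->refcnt++;
        return 0;
    }

    /* Last cpu out of the set frees the shared struct. */
    static void coupled_unregister_sketch(struct cpuidle_coupled_sketch **slot)
    {
        if (--(*slot)->refcnt == 0) {
            kfree(*slot);
            *slot = NULL;
        }
    }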
707 * cpu_online_mask doesn't change while cpus are coordinating coupled idle.
726 * cpu_online_mask doesn't change while cpus are coordinating coupled idle.
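The @prevent flag documented earlier is presumably what enforces this. A sketch of the guard: raise the flag, poke every other cpu so it re-reads the flag and falls back to the safe state, then wait for the waiting count to drain before the hotplug proceeds:

    /* Block coupled idle for the duration of a cpu hotplug transition. */
    static void coupled_prevent_idle_sketch(struct cpuidle_coupled_sketch *c)
    {
        int cpu, this_cpu = get_cpu();

        c->prevent++;
        smp_wmb();      /* publish prevent before poking anyone */

        for_each_cpu(cpu, &c->coupled_cpus)
            if (cpu != this_cpu)
                coupled_poke_sketch(cpu);
        put_cpu();

        /* wait until no cpu is still in the waiting loop */
        while ((atomic_read(&c->ready_waiting_counts)
                & WAITING_MASK_SKETCH) != 0)
            cpu_relax();
    }

    static void coupled_allow_idle_sketch(struct cpuidle_coupled_sketch *c)
    {
        c->prevent--;   /* cpu_online_mask is stable again */
    }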