Lines matching "that"

33 CPU idle time management operates on CPUs as seen by the *CPU scheduler* (that
35 work in the system). In its view, CPUs are *logical* units. That is, they need
38 entity which appears to be fetching instructions that belong to one sequence
43 program) at a time, it is a CPU. In that case, if the hardware is asked to
44 enter an idle state, that applies to the processor as a whole.
51 time. The entire cores are CPUs in that case and if the hardware is asked to
52 enter an idle state, that applies to the core that asked for it in the first
54 that the core belongs to (in fact, it may apply to an entire hierarchy of larger
57 remaining core asks the processor to enter an idle state, that may trigger it
59 other cores in that unit.
62 program in the same time frame (that is, each core may be able to fetch
64 frame, but not necessarily entirely in parallel with each other). In that case
67 (or hyper-threads specifically on Intel hardware), that each can follow one
70 by one of them, the hardware thread (or CPU) that asked for it is stopped, but
72 core also have asked the processor to enter an idle state. In that situation,
86 running that code, and some context information that needs to be loaded into the
92 there is a CPU available for that (for example, they are not waiting for any
103 in Linux idle CPUs run the code of the "idle" task called *the idle loop*. That
118 calls into a code module referred to as the *governor* that belongs to the CPU
125 conditions at hand. For this purpose, idle states that the hardware can be
128 (linear) array. That array has to be prepared and supplied by the ``CPUIdle``
131 hardware and to work with any platforms that the Linux kernel can run on.
133 Each idle state present in that array is characterized by two parameters to be
139 corresponds to the power drawn by the processor in that state.] The exit
142 wakeup from that state. Note that in general the exit latency also must cover
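The lines above describe the two parameters that characterize each idle state in the driver's array: the target residency and the exit latency. A minimal sketch of how a governor could use them to pick a state is below; the state names, numbers, and the `pick_state` helper are illustrative only, not the kernel's actual code.

```python
# Illustrative sketch (not kernel code): choosing the deepest idle state
# whose target residency fits the predicted idle duration and whose exit
# latency respects a latency limit. All numbers are made up.

from dataclasses import dataclass

@dataclass
class IdleState:
    name: str
    target_residency_us: int  # minimum idle time for the state to pay off
    exit_latency_us: int      # worst-case delay from wakeup to first instruction

def pick_state(states, predicted_idle_us, latency_limit_us):
    """Return the deepest state that fits, or None if none does.

    `states` is ordered shallow-to-deep, like the driver's array."""
    best = None
    for s in states:
        if (s.target_residency_us <= predicted_idle_us
                and s.exit_latency_us <= latency_limit_us):
            best = s
    return best

states = [
    IdleState("C1", target_residency_us=2, exit_latency_us=2),
    IdleState("C3", target_residency_us=100, exit_latency_us=50),
    IdleState("C6", target_residency_us=400, exit_latency_us=133),
]
print(pick_state(states, predicted_idle_us=500, latency_limit_us=60).name)  # C3
```

Here "C6" fits the predicted idle duration but is rejected because its exit latency exceeds the limit, so the shallower "C3" wins.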
147 There are two types of information that can influence the governor's decisions.
148 First of all, the governor knows the time until the closest timer event. That
150 when they will trigger, and it is the maximum time the hardware that the given
154 when that may happen. The governor can only see how much time the CPU actually
155 was idle after it has been woken up (that time will be referred to as the *idle
156 duration* from now on) and it can use that information somehow along with the
158 governor uses that information depends on what algorithm is implemented by it
159 and that is the primary reason for having more than one governor in the
167 been passed to the kernel, but that is not safe in general, so it should not be
168 done on production systems (that may change in the future, though). The name of
176 matching driver. For example, there are two drivers that can work with the
178 hardcoded idle states information and the other able to read that information
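The lines above mention that multiple governors and drivers (e.g. ``acpi_idle`` and ``intel_idle``) can exist, with one driver and one governor in use at a time. A small sketch of checking which ones are active via sysfs; on a real system the base path is ``/sys/devices/system/cpu/cpuidle``, and taking it as a parameter is an assumption of this sketch so it can also run against a fake tree.

```python
# Sketch: report the active CPUIdle driver and governor from sysfs.
# Real base path: /sys/devices/system/cpu/cpuidle (requires a Linux system).

from pathlib import Path

def cpuidle_status(base="/sys/devices/system/cpu/cpuidle"):
    base = Path(base)
    read = lambda name: (base / name).read_text().strip()
    return {
        "driver": read("current_driver"),      # e.g. "intel_idle"
        "governor": read("current_governor"),  # e.g. "menu"
    }
```

The exact names reported depend on the platform and kernel configuration.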
193 The scheduler tick is a timer that triggers periodically in order to implement
199 prioritization and so on and when that time slice is used up, the CPU should be
202 is there to make the switch happen regardless. That is not the only role of the
210 the tick period length. Moreover, in that case the idle duration of any CPU
220 the scheduler tick entirely on idle CPUs in principle, even though that may not
227 reprogrammed in that case. Second, if the governor is expecting a non-timer
229 be harmful. Namely, in that case the governor will select an idle state with
230 the target residency within the time until the expected wakeup, so that state is
232 state then, as that would contradict its own expectation of a wakeup in short
241 so that it does not wake up the CPU too early.
246 to leave it as is and the governor needs to take that into account.
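The lines above discuss when stopping the scheduler tick helps and when it would be harmful, and that an already-stopped tick is simply taken into account. A toy decision sketch of that trade-off follows; ``TICK_US`` and the rule itself are assumptions for illustration, not the kernel's actual logic.

```python
# Illustrative sketch of the tick-stopping trade-off (not kernel code).

TICK_US = 4000  # assumed 4 ms tick period (HZ=250) for illustration

def should_stop_tick(expected_sleep_us, tick_already_stopped):
    if tick_already_stopped:
        # The tick cannot be restarted here, so the governor simply
        # takes the stopped tick into account.
        return True
    # If the CPU is expected to wake up within one tick period anyway
    # (e.g. a non-timer wakeup is anticipated and a shallow state was
    # selected), stopping the tick buys nothing and risks oversleeping
    # in a deep state if the expected wakeup never arrives.
    return expected_sleep_us > TICK_US
```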
249 loop altogether. That can be done through the build-time configuration of it
255 The systems that run kernels configured to allow the scheduler tick to be
270 Namely, when invoked to select an idle state for a CPU (i.e. an idle state that
275 that the scheduler tick will be stopped. That time, referred to as the *sleep
282 for some I/O operations to complete and the other one is used when that is not
283 the case. Each array contains several correction factor values that correspond
284 to different sleep length ranges organized so that each range represented in the
291 The sleep length is multiplied by the correction factor for the range that it
299 that 6 times the standard deviation), the average is regarded as the "typical
311 workloads. It uses the observation that if the exit latency of the selected
313 in that state probably will be very short and the amount of energy to save by
315 overhead related to entering that state and exiting it. Thus selecting a
318 additionally is divided by a value depending on the number of tasks that
320 complete. The result of that division is compared with the latency limit coming
329 idle duration, but still below it, and exit latency that does not exceed the
333 if it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
338 that time, the governor may need to select a shallower state with a suitable
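The cluster of lines above sketches the ``menu`` governor's approach: correcting the sleep length with learned factors, trusting the average of recent idle durations when their spread is small, and tightening the latency limit when tasks are waiting on I/O. A heavily simplified sketch of those ideas; the threshold and all constants are made up, and this is not the kernel's implementation.

```python
# Very simplified sketch of menu-style idle-duration prediction
# (illustrative only; thresholds and factors are invented).

import statistics

def predict_idle_us(sleep_length_us, recent_idle_us, correction):
    # Scale the timer-based sleep length by a learned correction factor.
    predicted = sleep_length_us * correction
    # If recent observed idle durations cluster tightly, treat their
    # average as a "typical interval" and take the smaller of the two.
    if len(recent_idle_us) >= 3:
        avg = statistics.mean(recent_idle_us)
        if statistics.pstdev(recent_idle_us) < avg / 2:  # arbitrary threshold
            predicted = min(predicted, avg)
    return predicted

def effective_latency_limit_us(latency_limit_us, nr_iowaiters):
    # Tasks waiting on I/O make wakeup latency matter more, so the limit
    # is tightened by dividing it by the number of such tasks.
    return latency_limit_us // max(1, nr_iowaiters)
```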
350 given conditions. However, it applies a different approach to that problem.
354 and use that information to pick up the idle state that is most likely to
356 that were running on the given CPU in the past and are waiting on some I/O
357 operations to complete now at all (there is no guarantee that they will run on
361 tick excluded) for that purpose.
365 assumption that the scheduler tick will be stopped (that also is the upper bound
366 on the time until the next CPU wakeup). That value is then used to preselect an
370 The ``hits`` and ``misses`` metrics measure the likelihood that a given idle
375 greater than the sleep length (that is, when the idle state corresponding to
380 (that is, it is increased when the given idle state "matches" both the sleep
385 The ``early_hits`` metric measures the likelihood that a given idle state will
395 to the sleep length. Then, the ``hits`` and ``misses`` metrics of that idle
397 greater (which means that that idle state is likely to "match" the observed idle
410 the target residency of the preselected idle state, that idle state becomes the
414 one and finds the deepest of them with the target residency within that average.
415 That idle state is then taken as the final candidate to ask for.
418 it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
424 than that time, a shallower state with a suitable target residency may need to
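The lines above describe the TEO-style ``hits``, ``misses`` and ``early_hits`` metrics: whether the idle state matching the sleep length also matched the observed idle duration, and which shallower state matched an early wakeup. A bookkeeping sketch of that idea (illustrative only, not the kernel's data structures):

```python
# Illustrative sketch of TEO-style metric updates (not kernel code).
# `states` is a shallow-to-deep list of target residencies in microseconds.

def matching_state(states, duration_us):
    """Index of the deepest state whose target residency fits duration_us,
    or None if even the shallowest one does not."""
    idx = None
    for i, target_residency in enumerate(states):
        if target_residency <= duration_us:
            idx = i
    return idx

def update_metrics(metrics, states, sleep_length_us, measured_idle_us):
    i_sleep = matching_state(states, sleep_length_us)
    i_idle = matching_state(states, measured_idle_us)
    if i_sleep is None:
        return
    if i_idle == i_sleep:
        metrics[i_sleep]["hits"] += 1    # state matched both values
    else:
        metrics[i_sleep]["misses"] += 1  # CPU woke up earlier than the timer
        if i_idle is not None:
            metrics[i_idle]["early_hits"] += 1  # this state matched the wakeup
```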
439 the hierarchy. In that case, the `target residency and exit latency parameters
445 a "module" and suppose that asking the hardware to enter a specific idle state
450 "module" level, but there is no guarantee that this is going to happen (the core
451 asking for idle state "X" may just end up in that state by itself instead).
454 the module (including the time needed to enter it), because that is the minimum
456 that state. Analogously, the exit latency parameter of that object must cover
458 because that is the maximum delay between a wakeup signal and the time the CPU
459 will start to execute the first new instruction (assuming that both cores in the
469 that the processor hardware finally goes into must always follow the parameters
471 latency of that idle state must not exceed the exit latency parameter of the
477 order to ask the hardware to enter that state. Also, for each
480 statistics of the given idle state. That information is exposed by the kernel
485 CPU at the initialization time. That directory contains a set of subdirectories
531 between them is that the name is expected to be more concise, while the
536 given idle state is disabled for this particular CPU, which means that the
538 driver will never ask the hardware to enter it for that CPU as a result.
541 never be asked for by any of them. [Note that, due to the way the ``ladder``
542 governor is implemented, disabling an idle state prevents that governor from
550 unless that state was disabled globally in the driver (in which case it cannot
557 available) and if it contains a nonzero number, that number may not be very
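The lines above refer to the per-state sysfs attributes (``name``, ``desc``, ``disable``, ``usage``, ``time``) under each CPU's ``cpuidle`` directory. A sketch of enumerating the state directories and toggling ``disable`` for one of them; on a real system the base path would be e.g. ``/sys/devices/system/cpu/cpu0/cpuidle`` and writing ``disable`` needs root, while here the path is a parameter so the sketch also works against a fake tree.

```python
# Sketch: inspect and disable per-CPU idle states via their sysfs files.
# Real base path: /sys/devices/system/cpu/cpu<N>/cpuidle.

from pathlib import Path

def list_states(base):
    out = []
    for d in sorted(Path(base).glob("state[0-9]*")):
        out.append({
            "dir": d.name,
            "name": (d / "name").read_text().strip(),
            "disabled": (d / "disable").read_text().strip() != "0",
        })
    return out

def set_disabled(base, index, disabled=True):
    # Writing 1 to `disable` keeps governors from selecting this state
    # for this CPU; writing 0 re-enables it.
    (Path(base) / f"state{index}" / "disable").write_text(
        "1" if disabled else "0")
```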
605 framework maintains a list of requests that have been made so far in each
612 "open" operation represents that request. If that file descriptor is then
616 the entire list of requests and that effective value will be set as a new
622 file controls the PM QoS request associated with that file descriptor, but it
627 with that file descriptor to be removed from the ``PM_QOS_CPU_DMA_LATENCY``
628 class priority list and destroyed. If that happens, the priority list mechanism
630 and that value will become the new real constraint.
636 process does that. In other words, this PM QoS request is shared by the entire
644 (there may be other requests coming from kernel code in that list).
649 latency of the idle states they can select for that CPU. They should never
650 select any idle states with exit latency beyond that limit.
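The lines above describe the ``PM_QOS_CPU_DMA_LATENCY`` class: a process makes a latency request by writing to ``/dev/cpu_dma_latency``, the request lives as long as the file descriptor stays open, and closing it removes the request. A user-space sketch follows; the path is parameterized (an assumption of this sketch) so it can be exercised against an ordinary file, and the kernel treats a write of exactly ``sizeof(s32)`` bytes as a binary value.

```python
# Sketch: make a CPU wakeup-latency PM QoS request from user space by
# writing a 32-bit value to /dev/cpu_dma_latency. The constraint holds
# only while the returned file object stays open; close() removes it.

import struct

def request_cpu_latency(max_latency_us, path="/dev/cpu_dma_latency"):
    f = open(path, "wb", buffering=0)
    f.write(struct.pack("=i", max_latency_us))  # native-endian s32
    return f  # caller must keep this open for the request to remain in effect
```

Governors then never select idle states whose exit latency exceeds the effective (smallest) requested value.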
665 support code that is expected to provide a default mechanism for this purpose.
666 That default mechanism usually is the least common denominator for all of the
673 the name of an available governor (e.g. ``cpuidle.governor=menu``) and that
675 the ``menu`` governor to be used on the systems that use the ``ladder`` governor
687 architecture support code to deal with idle CPUs. How it does that depends on
694 that using ``idle=poll`` is somewhat drastic in many cases, as preventing idle
697 P-states (see |cpufreq|) that require any number of CPUs in a package to be
709 drivers that can be passed to them via the kernel command line. Specifically,
715 idle states deeper than idle state ``<n>``. In that case, they will never ask
721 Also, the ``acpi_idle`` driver is part of the ``processor`` kernel module that