The KVM halt polling system
===========================

The KVM halt polling system provides a feature within KVM whereby the latency
of a guest can, under some circumstances, be reduced by polling in the host
for some time period after the guest has elected to no longer run by ceding.
That is, when a guest vcpu has ceded, or in the case of powerpc when all of the
vcpus of a single vcore have ceded, the host kernel polls for wakeup conditions
before giving up the cpu to the scheduler in order to let something else run.

Polling provides a latency advantage in cases where the guest can be run again
very quickly by at least saving us a trip through the scheduler, normally on
the order of a few microseconds, although performance benefits are workload
dependent. In the event that no wakeup source arrives during the polling
interval, or some other task on the runqueue is runnable, the scheduler is
invoked. Thus halt polling is especially useful on workloads with very short
wakeup periods where the time spent halt polling is minimised and the time
savings of not invoking the scheduler are distinguishable.

The generic halt polling code is implemented in:

    virt/kvm/kvm_main.c: kvm_vcpu_block()

The powerpc kvm-hv specific case is implemented in:

    arch/powerpc/kvm/book3s_hv.c: kvmppc_vcore_blocked()

Halt Polling Interval
=====================

The maximum time for which to poll before invoking the scheduler, referred to
as the halt polling interval, is increased and decreased based on the perceived
effectiveness of the polling in an attempt to limit pointless polling.
This value is stored in either the vcpu struct:

    kvm_vcpu->halt_poll_ns

or in the case of powerpc kvm-hv, in the vcore struct:

    kvmppc_vcore->halt_poll_ns

Thus this is a per vcpu (or vcore) value.

During polling, if a wakeup source is received within the halt polling
interval, the interval is left unchanged. In the event that a wakeup source
isn't received during the polling interval (and thus schedule is invoked) there
are two options: either the polling interval and total block time[0] were less
than the global max polling interval (see module params below), or the total
block time was greater than the global max polling interval.

In the event that both the polling interval and total block time were less than
the global max polling interval, then the polling interval can be increased in
the hope that next time, during the longer polling interval, the wakeup source
will be received while the host is polling and the latency benefits will be
obtained. The polling interval is grown in the function grow_halt_poll_ns() and
is multiplied by the module parameter halt_poll_ns_grow.

In the event that the total block time was greater than the global max polling
interval, then the host will never poll for long enough (limited by the global
max) to wake up during the polling interval, so it may as well be shrunk in
order to avoid pointless polling. The polling interval is shrunk in the
function shrink_halt_poll_ns() and is divided by the module parameter
halt_poll_ns_shrink, or set to 0 iff halt_poll_ns_shrink == 0.

It is worth noting that this adjustment process attempts to home in on some
steady state polling interval, but will only really do a good job for wakeups
which come at an approximately constant rate, otherwise there will be constant
adjustment of the polling interval.

[0] total block time:   the time between when the halt polling function is
                        invoked and a wakeup source received (irrespective of
                        whether the scheduler is invoked within that function).
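
As an illustration of the adjustment process described above, the following
standalone sketch mirrors the grow/shrink behaviour in simplified form. It is
not the kernel's grow_halt_poll_ns()/shrink_halt_poll_ns() code: the
vcpu_sketch struct, the adjust_halt_poll_ns() helper, the example ceiling
value and the seeding of a zero interval are assumptions made purely for
illustration.

    /* Simplified sketch only: not the kernel implementation.  The module
     * parameters are modelled as plain globals and the per vcpu interval
     * as a single field. */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned int halt_poll_ns = 200000;    /* example ceiling value, ns */
    static unsigned int halt_poll_ns_grow = 2;    /* multiplier when growing   */
    static unsigned int halt_poll_ns_shrink = 0;  /* divisor when shrinking    */

    struct vcpu_sketch {
        unsigned int halt_poll_ns;                /* per vcpu (or vcore) value */
    };

    static void adjust_halt_poll_ns(struct vcpu_sketch *v,
                                    bool woken_while_polling,
                                    unsigned int block_ns)
    {
        if (woken_while_polling)
            return;               /* polling paid off: leave the interval alone */

        if (block_ns < halt_poll_ns) {
            /* Block was shorter than the global max: grow, capped at the max. */
            if (v->halt_poll_ns == 0)
                v->halt_poll_ns = 1;      /* illustrative seed so the multiply
                                           * can take effect on a zero interval */
            v->halt_poll_ns *= halt_poll_ns_grow;
            if (v->halt_poll_ns > halt_poll_ns)
                v->halt_poll_ns = halt_poll_ns;
        } else {
            /* Block was longer than we may ever poll: shrink (or reset to 0). */
            if (halt_poll_ns_shrink == 0)
                v->halt_poll_ns = 0;
            else
                v->halt_poll_ns /= halt_poll_ns_shrink;
        }
    }

    int main(void)
    {
        struct vcpu_sketch v = { .halt_poll_ns = 0 };

        adjust_halt_poll_ns(&v, false, 50000);    /* short block: grows   */
        printf("after grow:   %u\n", v.halt_poll_ns);

        adjust_halt_poll_ns(&v, false, 500000);   /* long block: shrinks  */
        printf("after shrink: %u\n", v.halt_poll_ns);
        return 0;
    }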

Module Parameters
=================

The kvm module has 3 tuneable module parameters to adjust the global max
polling interval, as well as the rate at which the polling interval is grown
and shrunk. These variables are defined in include/linux/kvm_host.h and as
module parameters in virt/kvm/kvm_main.c, or arch/powerpc/kvm/book3s_hv.c in
the powerpc kvm-hv case.

Module Parameter     | Description                      | Default Value
--------------------------------------------------------------------------------
halt_poll_ns         | The global max polling interval  | KVM_HALT_POLL_NS_DEFAULT
                     | which defines the ceiling value  |
                     | of the polling interval for      | (per arch value)
                     | each vcpu.                       |
--------------------------------------------------------------------------------
halt_poll_ns_grow    | The value by which the halt      | 2
                     | polling interval is multiplied   |
                     | in the grow_halt_poll_ns()       |
                     | function.                        |
--------------------------------------------------------------------------------
halt_poll_ns_shrink  | The value by which the halt      | 0
                     | polling interval is divided in   |
                     | the shrink_halt_poll_ns()        |
                     | function.                        |
--------------------------------------------------------------------------------

These module parameters can be set from the sysfs files in:

    /sys/module/kvm/parameters/

Note: these module parameters are system wide values and are not able to be
      tuned on a per vm basis.

Further Notes
=============

- Care should be taken when setting the halt_poll_ns module parameter as a
large value has the potential to drive the cpu usage to 100% on a machine
which would be almost entirely idle otherwise. This is because even if a guest
has wakeups during which very little work is done and which are quite far
apart, if the period is shorter than the global max polling interval
(halt_poll_ns) then the host will always poll for the entire block time and
thus cpu utilisation will go to 100%.

- Halt polling essentially presents a trade-off between power usage and
latency, and the module parameters should be used to tune this trade-off.
Idle cpu time is essentially converted to host kernel time with the aim of
decreasing latency when entering the guest.

- Halt polling will only be conducted by the host when no other tasks are
runnable on that cpu, otherwise the polling will cease immediately and
schedule will be invoked to allow that other task to run. Thus halt polling
cannot be used by a guest to deny cpu time to other runnable tasks.
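
For readers who want the overall shape in one place, the sketch below strings
the pieces of this document together: poll for up to the per vcpu interval,
stop immediately if another task becomes runnable, and fall back to the
scheduler if no wakeup arrives. It is not the actual kvm_vcpu_block() code;
every helper below (now_ns(), wakeup_pending(), other_task_runnable(),
do_schedule()) is a stub standing in for the real wakeup, runqueue and
scheduler machinery.

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    static unsigned long long now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (unsigned long long)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Stubs: stand-ins for the real wakeup and runqueue checks. */
    static bool wakeup_pending(void)      { return false; }
    static bool other_task_runnable(void) { return false; }
    static void do_schedule(void)         { /* give the cpu to the scheduler */ }

    /* Poll for up to poll_limit_ns; return true if a wakeup arrived while
     * polling (scheduler avoided), false if we fell back to the scheduler. */
    static bool halt_poll_block(unsigned long long poll_limit_ns)
    {
        unsigned long long start = now_ns();

        while (now_ns() - start < poll_limit_ns) {
            if (wakeup_pending())
                return true;           /* latency win: no trip through schedule */
            if (other_task_runnable())
                break;                 /* cease polling, let the other task run */
        }

        do_schedule();                 /* block until a wakeup source arrives */
        return false;
    }

    int main(void)
    {
        printf("woken while polling: %d\n", halt_poll_block(200000ULL));
        return 0;
    }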