=====================
CFS Bandwidth Control
=====================
.. note::
   This document only discusses CPU bandwidth control for SCHED_NORMAL.
   The SCHED_RT case is covered in Documentation/scheduler/sched-rt-group.rst
CFS bandwidth control is a CONFIG_FAIR_GROUP_SCHED extension which allows the
specification of the maximum CPU bandwidth available to a group or hierarchy.
The bandwidth allowed for a group is specified using a quota and period. Within
each given "period" (microseconds), a task group is allocated up to "quota"
microseconds of CPU time. That quota is assigned to per-cpu run queues in
slices as threads in the cgroup become runnable. Once all quota has been
assigned any additional requests for quota will result in those threads being
throttled. Throttled threads will not be able to run again until the next
period when the quota is replenished.
A group's unassigned quota is globally tracked, being refreshed back to
cfs_quota units at each period boundary. As threads consume this bandwidth it
is transferred to cpu-local "silos" on a demand basis. The amount transferred
within each of these updates is tunable and described as the "slice".
Burst feature
-------------
This feature borrows time now against our future underrun, at the cost of
increased interference against the other system users. All nicely bounded.
Traditional (UP-EDF) bandwidth control is something like:

  (U = \Sum u_i) <= 1

For further discussion of the burstable CFS bandwidth controller, see:
https://lore.kernel.org/lkml/5371BD36-55AE-4F71-B9D7-B86DC32E3D2B@linux.alibaba.com/
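The UP-EDF admission condition above is a one-line check. A minimal sketch in Python (the helper name is ours, for illustration only):

```python
def edf_admissible(utilizations):
    """Classic UP-EDF admission test: the total utilization U = sum(u_i)
    of all entities must not exceed 1 (one CPU's worth of bandwidth)."""
    return sum(utilizations) <= 1.0

# Three entities at 50%, 30% and 20% utilization fit on one CPU.
edf_admissible([0.5, 0.3, 0.2])  # True
edf_admissible([0.6, 0.5])       # False: over-subscribed
```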
Management
----------
Quota, period and burst are managed within the cpu subsystem via cgroupfs.

.. note::
   The cgroupfs files described in this section are only applicable
   to cgroup v1. For cgroup v2, see
   :ref:`Documentation/admin-guide/cgroup-v2.rst <cgroup-v2-cpu>`.
- cpu.cfs_quota_us: run-time replenished within a period (in microseconds)
- cpu.cfs_period_us: the length of a period (in microseconds)
- cpu.stat: exports throttling statistics [explained further below]
- cpu.cfs_burst_us: the maximum accumulated run-time (in microseconds)
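These control files are written like any other cgroupfs attribute. A minimal sketch in Python, assuming a cgroup v1 cpu controller directory (the helper name and the example path are illustrative, not part of the kernel interface):

```python
import os

def set_cfs_limit(cgroup_dir, quota_us, period_us=100000, burst_us=0):
    """Write CFS bandwidth settings into a cgroup v1 cpu controller
    directory.  quota_us = -1 removes the limit (unconstrained group)."""
    settings = {
        "cpu.cfs_period_us": period_us,  # length of a period
        "cpu.cfs_quota_us": quota_us,    # run-time replenished per period
        "cpu.cfs_burst_us": burst_us,    # maximum accumulated run-time
    }
    for name, value in settings.items():
        with open(os.path.join(cgroup_dir, name), "w") as f:
            f.write("%d" % value)

# Example (hypothetical group path): 2 CPUs worth of runtime every 500ms.
# set_cfs_limit("/sys/fs/cgroup/cpu/mygroup", quota_us=1000000, period_us=500000)
```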
The default values are::

	cpu.cfs_period_us=100ms
	cpu.cfs_quota_us=-1
	cpu.cfs_burst_us=0
A value of -1 for cpu.cfs_quota_us indicates that the group does not have any
bandwidth restriction in place; such a group is described as an unconstrained
bandwidth group. This represents the traditional work-conserving behavior for
CFS.
Writing any (valid) positive value(s) no smaller than cpu.cfs_burst_us will
enact the specified bandwidth limit. The minimum allowed value for the quota or
period is 1ms. There is also an upper bound on the period length of 1s.
Additional restrictions exist when bandwidth limits are used in a hierarchical
fashion; these are explained in more detail below.
Writing any negative value to cpu.cfs_quota_us will remove the bandwidth limit
and return the group to an unconstrained state once more.
A value of 0 for cpu.cfs_burst_us indicates that the group can not accumulate
any unused bandwidth. It leaves the traditional bandwidth control behavior for
CFS unchanged. Writing any (valid) positive value(s) no larger than
cpu.cfs_quota_us into cpu.cfs_burst_us will enact the cap on unused bandwidth
accumulation.
Any updates to a group's bandwidth specification will result in it becoming
unthrottled if it is in a constrained state.
System wide settings
--------------------
For efficiency run-time is transferred between the global pool and CPU local
"silos" in a batch fashion. This greatly reduces global accounting pressure
on large systems. The amount transferred each time such an update is required
is described as the "slice".

This is tunable via procfs::

	/proc/sys/kernel/sched_cfs_bandwidth_slice_us (default=5ms)

Larger slice values will reduce transfer overheads, while smaller values allow
for more fine-grained consumption.
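The overhead side of this tradeoff can be put into rough numbers. A back-of-the-envelope sketch (the helper name is ours; it assumes the 5ms default slice and a single cpu-bound thread that consumes its whole quota):

```python
import math

def transfers_per_period(quota_us, slice_us=5000):
    """Worst-case number of global-pool -> cpu-silo transfers needed for a
    single cpu-bound thread to consume its full quota in one period, given
    the slice size (kernel.sched_cfs_bandwidth_slice_us, default 5ms)."""
    return math.ceil(quota_us / slice_us)

transfers_per_period(100000)                 # 20 transfers with the 5ms default
transfers_per_period(100000, slice_us=1000)  # 100 with a 1ms slice
```

Shrinking the slice from 5ms to 1ms quintuples the accounting traffic in this model, which is the pressure the batching is meant to avoid.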
Statistics
----------
A group's bandwidth statistics are exported via 5 fields in cpu.stat.
- nr_periods: Number of enforcement intervals that have elapsed.
- nr_throttled: Number of times the group has been throttled/limited.
- throttled_time: The total time duration (in nanoseconds) for which entities
  of the group have been throttled.
- nr_bursts: Number of periods in which a burst occurred.
- burst_time: Cumulative wall-time (in nanoseconds) that any CPU has used
  above quota in the respective periods.
This interface is read-only.
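cpu.stat is plain "name value" text, so consuming it is straightforward. A small illustrative parser (the function name and sample values are ours):

```python
def parse_cpu_stat(text):
    """Parse cgroup v1 cpu.stat contents ("<name> <value>" per line)."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if name:
            stats[name] = int(value)
    return stats

# Sample contents (values made up for illustration):
sample = (
    "nr_periods 300\n"
    "nr_throttled 12\n"
    "throttled_time 45000000\n"
    "nr_bursts 3\n"
    "burst_time 9000000\n"
)
stats = parse_cpu_stat(sample)
# e.g. the fraction of enforcement intervals in which the group hit its limit:
throttle_ratio = stats["nr_throttled"] / stats["nr_periods"]
```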
Hierarchical considerations
---------------------------
The interface enforces that an individual entity's bandwidth is always
attainable, that is: max(c_i) <= C. However, over-subscription in the
aggregate case is explicitly allowed to enable work-conserving semantics
within a hierarchy:

  e.g. \Sum (c_i) may exceed C

[ Where C is the parent's bandwidth, and c_i its children ]
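This constraint amounts to a tiny predicate; a sketch (the function name is ours, for illustration):

```python
def attainable(parent_quota_us, child_quotas_us):
    """The interface only enforces that each child's bandwidth is
    individually attainable: max(c_i) <= C.  The aggregate sum(c_i) is
    deliberately NOT checked, so over-subscription is allowed."""
    return max(child_quotas_us) <= parent_quota_us

# Parent C = 100ms per period; three children at 50ms each are individually
# attainable even though their aggregate (150ms) exceeds C.
attainable(100000, [50000, 50000, 50000])  # True
attainable(100000, [120000])               # False: child exceeds parent
```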
CFS Bandwidth Quota Caveats
---------------------------
The fact that cpu-local slices do not expire results in some interesting corner
cases which should be understood.

For cgroup cpu-constrained applications that are cpu limited, this is a
relatively moot point because they will naturally consume the entirety of their
quota as well as the entirety of each cpu-local slice in each period. As a
result it is expected that nr_periods roughly equal nr_throttled, and that
cpuacct.usage will increase roughly equally to cfs_quota in each period.

For highly-threaded, non-cpu bound applications this non-expiration nuance
allows applications to briefly burst past their quota limits by the amount of
unused slice on each cpu that the task group is running on (typically at most
1ms per cpu or as defined by min_cfs_rq_runtime). This slight burst only
applies if quota had been assigned to a cpu and then not fully used or returned
in previous periods. This burst amount will not be transferred between cores.
As a result, this mechanism still strictly limits the task group to quota
average usage, albeit over a longer time window than a single period. This
also limits the burst ability to no more than 1ms per cpu. It provides a
better, more predictable user experience for highly threaded applications with
small quota limits on high core count machines. It also eliminates the
propensity to throttle these applications while simultaneously using less than
quota amounts of cpu. Put another way: by allowing the unused portion of a
slice to remain valid across periods, we have decreased the possibility of
wastefully expiring quota on cpu-local silos that don't need a full slice's
amount of cpu time.

The interaction between cpu-bound and non-cpu-bound interactive applications
should also be considered, especially when single core usage hits 100%. If you
gave each of these applications half of a cpu-core and they both got scheduled
on the same CPU it is theoretically possible that the non-cpu bound application
will use up to 1ms additional quota in some periods, thereby preventing the
cpu-bound application from fully using its quota by that same amount. In these
instances it will be up to the CFS algorithm (see sched-design-CFS.rst) to
decide which application is chosen to run, as they will both be runnable and
have remaining quota. This runtime discrepancy will be made up in the following
periods when the interactive application idles.
Examples
--------
1. Limit a group to 1 CPU worth of runtime::

	If period is 250ms and quota is also 250ms, the group will get
	1 CPU worth of runtime every 250ms.

	# echo 250000 > cpu.cfs_quota_us /* quota = 250ms */
	# echo 250000 > cpu.cfs_period_us /* period = 250ms */
2. Limit a group to 2 CPUs worth of runtime on a multi-CPU machine

   With 500ms period and 1000ms quota, the group can get 2 CPUs worth of
   runtime every 500ms::

	# echo 1000000 > cpu.cfs_quota_us /* quota = 1000ms */
	# echo 500000 > cpu.cfs_period_us /* period = 500ms */

   The larger period here allows for increased burst capacity.
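The arithmetic behind such examples is simply cpus = quota / period. A quick sketch (the helper name is ours, for illustration):

```python
def quota_for_cpus(cpus, period_us):
    """Quota (in microseconds) granting `cpus` worth of runtime per period:
    the CPU share a group receives is quota/period, so quota = cpus * period."""
    return int(round(cpus * period_us))

quota_for_cpus(2, 500000)   # 1000000us quota -> 2 CPUs worth per 500ms period
quota_for_cpus(0.2, 50000)  # 10000us quota -> 20% of 1 CPU per 50ms period
```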