/Linux-v5.10/Documentation/admin-guide/cgroup-v1/ |
D | freezer-subsystem.rst |
    6  and stop sets of tasks in order to schedule the resources of a machine
    9  whole. The cgroup freezer uses cgroups to describe the set of tasks to
    11  a means to start and stop the tasks composing the job.
    14  of tasks. The freezer allows the checkpoint code to obtain a consistent
    15  image of the tasks by attempting to force the tasks in a cgroup into a
    16  quiescent state. Once the tasks are quiescent another task can
    18  quiesced tasks. Checkpointed tasks can be restarted later should a
    19  recoverable error occur. This also allows the checkpointed tasks to be
    21  to another node and restarting the tasks there.
    24  and resuming tasks in userspace. Both of these signals are observable
    [all …]
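The freezer lines above describe quiescing a whole set of tasks so they can be checkpointed. As a minimal sketch of the cgroup-v1 freezer interface (the mount point and the group name "jobs" are assumptions, not taken from the hits), a program can write FROZEN and THAWED into the group's freezer.state file::

  /* Hypothetical sketch: freeze and thaw a v1 freezer group. Assumes the
   * freezer hierarchy is mounted at /sys/fs/cgroup/freezer and that a
   * group named "jobs" already exists and has tasks attached. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  static void write_state(const char *state)
  {
      int fd = open("/sys/fs/cgroup/freezer/jobs/freezer.state", O_WRONLY);

      if (fd < 0) {
          perror("open freezer.state");
          return;
      }
      if (write(fd, state, strlen(state)) < 0)
          perror("write freezer.state");
      close(fd);
  }

  int main(void)
  {
      write_state("FROZEN");    /* force every task in the group quiescent */
      /* ... checkpoint or inspect the frozen tasks here ... */
      write_state("THAWED");    /* let them run again */
      return 0;
  }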
|
D | cpuacct.rst |
    5  The CPU accounting controller is used to group tasks using cgroups and
    6  account the CPU usage of these groups of tasks.
    9  group accumulates the CPU usage of all of its child groups and the tasks
    17  visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
    18  the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
    20  by this group which is essentially the CPU time obtained by all the tasks
    27  # echo $$ > g1/tasks
    38  user: Time spent by tasks of the cgroup in user mode.
    39  system: Time spent by tasks of the cgroup in kernel mode.
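The cpuacct hit at line 27 attaches the shell to a child group with "echo $$ > g1/tasks". A rough C equivalent, assuming the cpuacct controller is mounted at /sys/fs/cgroup/cpuacct (a common v1 layout, not stated in the hits), creates the group, attaches the current process, and reads back the accumulated usage::

  /* Minimal sketch; the mount point and group name "g1" are assumptions. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[64];
      ssize_t n;
      int fd;

      mkdir("/sys/fs/cgroup/cpuacct/g1", 0755);

      /* Equivalent of "echo $$ > g1/tasks": attach this process. */
      fd = open("/sys/fs/cgroup/cpuacct/g1/tasks", O_WRONLY);
      if (fd < 0)
          return 1;
      dprintf(fd, "%d\n", (int)getpid());
      close(fd);

      /* cpuacct.usage is the CPU time (ns) used by all tasks in the group. */
      fd = open("/sys/fs/cgroup/cpuacct/g1/cpuacct.usage", O_RDONLY);
      if (fd < 0)
          return 1;
      n = read(fd, buf, sizeof(buf) - 1);
      if (n > 0) {
          buf[n] = '\0';
          printf("usage (ns): %s", buf);
      }
      close(fd);
      return 0;
  }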
|
D | cgroups.rst |
    45  tasks, and all their future children, into hierarchical groups with
    50  A *cgroup* associates a set of tasks with a set of parameters for one
    54  facilities provided by cgroups to treat groups of tasks in
    67  cgroups. Each hierarchy is a partition of all tasks in the system.
    81  tasks in each cgroup.
    100  the division of tasks into cgroups is distinctly different for
    102  hierarchy to be a natural division of tasks, without having to handle
    103  complex combinations of tasks that would be present if several
    114  tasks etc. The resource planning for this server could be along the
    123  In addition (system tasks) are attached to topcpuset (so
    [all …]
|
D | cpusets.rst |
    44  Nodes to a set of tasks. In this document "Memory Node" refers to
    47  Cpusets constrain the CPU and Memory placement of tasks to only
    82  the available CPU and Memory resources amongst the requesting tasks.
    139  - You can list all the tasks (by pid) attached to any cpuset.
    148  - in sched.c migrate_live_tasks(), to keep migrating tasks within
    184  - cpuset.sched_relax_domain_level: the searching range when migrating tasks
    192  CPUs and Memory Nodes, and attached tasks, are modified by writing
    200  on a system into related sets of tasks such that each set is constrained
    206  the detailed placement done on individual tasks and memory regions
    264  of the rate that the tasks in a cpuset are attempting to free up in
    [all …]
|
/Linux-v5.10/Documentation/scheduler/ |
D | sched-deadline.rst |
    12  3. Scheduling Real-Time Tasks
    22  5. Tasks CPU affinity
    43  that makes it possible to isolate the behavior of tasks between each other.
    53  "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
    58  consistent with the guarantee (using the CBS[2,3] algorithm). Tasks are then
    65  Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
    67  interference between different tasks (bandwidth isolation), while the EDF[1]
    69  to be executed next. Thanks to this feature, tasks that do not strictly comply
    74  tasks in the following way:
    128  Bandwidth reclaiming for deadline tasks is based on the GRUB (Greedy
    [all …]
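The sched-deadline.rst hits describe tasks declaring a runtime, deadline and period that the CBS enforces. A hedged userspace sketch of admitting a task to SCHED_DEADLINE through the sched_setattr() syscall is below; the 10ms/30ms/30ms numbers are made-up examples, and struct sched_attr is spelled out by hand because glibc does not export it::

  /* Sketch: request SCHED_DEADLINE with runtime/deadline/period of
   * 10ms/30ms/30ms. SYS_sched_setattr needs a reasonably recent libc;
   * otherwise __NR_sched_setattr can be used directly. */
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  #ifndef SCHED_DEADLINE
  #define SCHED_DEADLINE 6
  #endif

  struct sched_attr {
      uint32_t size;
      uint32_t sched_policy;
      uint64_t sched_flags;
      int32_t  sched_nice;
      uint32_t sched_priority;
      uint64_t sched_runtime;
      uint64_t sched_deadline;
      uint64_t sched_period;
  };

  int main(void)
  {
      struct sched_attr attr = {
          .size           = sizeof(attr),
          .sched_policy   = SCHED_DEADLINE,
          .sched_runtime  = 10 * 1000 * 1000,   /* 10 ms */
          .sched_deadline = 30 * 1000 * 1000,   /* 30 ms */
          .sched_period   = 30 * 1000 * 1000,   /* 30 ms */
      };

      if (syscall(SYS_sched_setattr, 0, &attr, 0))
          perror("sched_setattr");
      /* ... periodic job body, yielding at the end of each activation ... */
      return 0;
  }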
|
D | sched-design-CFS.rst |
    19  1/nr_running speed. For example: if there are 2 tasks running, then it runs
    26  is its actual runtime normalized to the total number of running tasks.
    37  [ small detail: on "ideal" hardware, at any time all tasks would have the same
    38  p->se.vruntime value --- i.e., tasks would execute simultaneously and no task
    44  up CPU time between runnable tasks as close to "ideal multitasking hardware" as
    62  increasing value tracking the smallest vruntime among all tasks in the
    67  The total number of running tasks in the runqueue is accounted through the
    68  rq->cfs.load value, which is the sum of the weights of the tasks queued on the
    71  CFS maintains a time-ordered rbtree, where all runnable tasks are sorted by the
    73  As the system progresses forwards, the executed tasks are put into the tree
    [all …]
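Lines 26 and 68 above say a task's vruntime is its runtime weighted against the other runnable tasks. As a purely conceptual sketch of that weighting (not the kernel's fixed-point __calc_delta() code), a heavier task's vruntime advances more slowly, so the leftmost-vruntime rule in the rbtree picks it more often::

  /* Conceptual only: NICE_0_LOAD is the load weight of a nice-0 task. */
  #define NICE_0_LOAD 1024ULL

  static unsigned long long vruntime_delta(unsigned long long delta_exec_ns,
                                           unsigned long long weight)
  {
      /* A task with twice the weight accrues vruntime at half the rate. */
      return delta_exec_ns * NICE_0_LOAD / weight;
  }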
|
D | sched-rt-group.rst |
    14  2.3 Basis for grouping tasks
    44  multiple groups of realtime tasks, each group must be assigned a fixed portion
    57  tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by
    72  The remaining CPU time will be used for user input and other tasks. Because
    73  realtime tasks have explicitly allocated the CPU time they need to perform
    74  their tasks, buffer underruns in the graphics or audio can be eliminated.
    110  SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away
    111  realtime tasks will not lock up the machine but leave a little time to recover
    120  bandwidth to the group before it will accept realtime tasks. Therefore you will
    121  not be able to run realtime tasks as any user other than root until you have
    [all …]
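Lines 44 and 120-121 above note that each realtime group must be handed a fixed CPU budget before it will accept realtime tasks. A sketch, assuming the v1 cpu controller is mounted at /sys/fs/cgroup/cpu with CONFIG_RT_GROUP_SCHED and a group named "audio" (both assumptions), allocates 50 ms of RT runtime per default 1 s period::

  /* Sketch: give group "audio" an RT budget so it can run SCHED_FIFO/RR
   * tasks; path and group name are illustrative. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/sys/fs/cgroup/cpu/audio/cpu.rt_runtime_us", O_WRONLY);

      if (fd < 0) {
          perror("open cpu.rt_runtime_us");
          return 1;
      }
      /* 50000 us out of the default 1000000 us cpu.rt_period_us */
      dprintf(fd, "%d\n", 50000);
      close(fd);
      return 0;
  }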
|
/Linux-v5.10/Documentation/power/ |
D | freezing-of-tasks.rst |
    2  Freezing of tasks
    7  I. What is the freezing of tasks?
    10  The freezing of tasks is a mechanism by which user space processes and some
    18  and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have
    30  All freezable tasks must react to that by calling try_to_freeze(), which
    62  initiated a freezing operation, the freezing of tasks will fail and the entire
    69  order to clear the PF_FROZEN flag for each frozen task. Then, the tasks that
    73  Rationale behind the functions dealing with freezing and thawing of tasks
    77  - freezes only userspace tasks
    80  - freezes all tasks (including kernel threads) because we can't freeze
    [all …]
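Line 30 above says every freezable task must react by calling try_to_freeze(). For kernel threads that usually looks like the following sketch (illustrative module-style code, not taken from the document)::

  /* Sketch: a kernel thread marks itself freezable and polls
   * try_to_freeze() in its main loop; the loop body does no real work. */
  #include <linux/delay.h>
  #include <linux/freezer.h>
  #include <linux/kthread.h>

  static int worker_fn(void *unused)
  {
      set_freezable();            /* clear PF_NOFREEZE so the freezer sees us */

      while (!kthread_should_stop()) {
          try_to_freeze();        /* park here while the system freezes */
          /* ... do one unit of work ... */
          msleep_interruptible(100);
      }
      return 0;
  }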
|
/Linux-v5.10/kernel/rcu/ |
D | tasks.h |
    23  * Definition for a Tasks-RCU-like mechanism.
    84  /* Track exiting tasks in order to allow them to be waited for. */
    97  /* RCU tasks grace-period state for debugging. */
    151  // Enqueue a callback for the specified flavor of Tasks RCU.
    170  // Wait for a grace period for the specified flavor of Tasks RCU.
    181  /* RCU-tasks kthread that detects grace periods and invokes callbacks. */
    196  * one RCU-tasks grace period and then invokes the callbacks. in rcu_tasks_kthread()
    244  /* Spawn RCU-tasks grace-period kthread, e.g., at core_initcall() time. */
    258  * Print any non-default Tasks RCU settings.
    267  pr_info("\tTrampoline variant of Tasks RCU enabled.\n"); in rcu_tasks_bootup_oddness()
    [all …]
|
/Linux-v5.10/Documentation/livepatch/ |
D | livepatch.rst |
    98  transition state where tasks are converging to the patched state.
    100  sequence occurs when a patch is disabled, except the tasks converge from
    104  interrupts. The same is true for forked tasks: the child inherits the
    108  safe to patch tasks:
    111  tasks. If no affected functions are on the stack of a given task,
    113  the tasks on the first try. Otherwise it'll keep trying
    121  a) Patching I/O-bound user tasks which are sleeping on an affected
    124  b) Patching CPU-bound user tasks. If the task is highly CPU-bound
    128  3. For idle "swapper" tasks, since they don't ever exit the kernel, they
    135  the second approach. It's highly likely that some tasks may still be
    [all …]
|
/Linux-v5.10/kernel/livepatch/ |
D | transition.c |
    30  * "straggler" tasks which failed to transition in the first attempt.
    46  * tasks even in userspace and idle.
    87  * All tasks have transitioned to KLP_UNPATCHED so we can now in klp_complete_transition()
    297  * on other methods (e.g., switching tasks at kernel exit). in klp_try_switch_task()
    340  * Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set.
    348  pr_notice("signaling remaining tasks\n"); in klp_send_signals()
    369  * Send fake signal to all non-kthread tasks which are in klp_send_signals()
    381  * Try to switch all remaining tasks to the target patch state by walking the
    382  * stacks of sleeping tasks and looking for any to-be-patched or
    386  * If any tasks are still stuck in the initial patch state, schedule a retry.
    [all …]
|
/Linux-v5.10/tools/perf/scripts/python/ |
D | sched-migration.py |
    100  def __init__(self, tasks = [0], event = RunqueueEventUnknown()): argument
    101  self.tasks = tuple(tasks)
    107  if taskState(prev_state) == "R" and next in self.tasks \
    108  and prev in self.tasks:
    114  next_tasks = list(self.tasks[:])
    115  if prev in self.tasks:
    127  if old not in self.tasks:
    129  next_tasks = [task for task in self.tasks if task != old]
    134  if new in self.tasks:
    137  next_tasks = self.tasks[:] + tuple([new])
    [all …]
|
/Linux-v5.10/tools/testing/selftests/futex/include/ |
D | futextest.h |
    84  * futex_wake() - wake one or more tasks blocked on uaddr
    85  * @nr_wake: wake up to this many tasks
    106  * futex_wake_bitset() - wake one or more tasks blocked on uaddr with bitset
    149  * @nr_wake: wake up to this many tasks
    150  * @nr_requeue: requeue up to this many tasks
    164  * futex_cmp_requeue() - requeue tasks from uaddr to uaddr2
    165  * @nr_wake: wake up to this many tasks
    166  * @nr_requeue: requeue up to this many tasks
    193  * futex_cmp_requeue_pi() - requeue tasks from uaddr to uaddr2 (PI aware)
    196  * @nr_wake: wake up to this many tasks
    [all …]
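The futextest.h hits document wrappers such as futex_wake() and their @nr_wake / @nr_requeue limits. A hedged sketch of what a futex_wake() wrapper typically reduces to, a raw FUTEX_WAKE syscall, is shown below; the body is an approximation, not copied from the header::

  /* Sketch: wake up to nr_wake tasks blocked on uaddr. Returns the
   * number of waiters actually woken, or -1 on error. */
  #include <linux/futex.h>
  #include <stdint.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int futex_wake(uint32_t *uaddr, int nr_wake, int opflags)
  {
      /* opflags is typically 0 or FUTEX_PRIVATE_FLAG. */
      return syscall(SYS_futex, uaddr, FUTEX_WAKE | opflags, nr_wake,
                     NULL, NULL, 0);
  }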
|
/Linux-v5.10/Documentation/RCU/ |
D | stallwarn.rst |
    105  The RCU, RCU-sched, and RCU-tasks implementations have CPU stall warning.
    176  This boot/sysfs parameter controls the RCU-tasks stall warning
    177  interval. A value of zero or less suppresses RCU-tasks stall
    179  in seconds. An RCU-tasks stall warning starts with the line:
    181  INFO: rcu_tasks detected stalls on tasks:
    184  task stalling the current RCU-tasks grace period.
    190  For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
    193  INFO: rcu_sched detected stalls on CPUs/tasks:
    201  PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
    202  the tasks will be indicated by PID, for example, "P3421". It is even
    [all …]
|
/Linux-v5.10/kernel/sched/ |
D | psi.c |
    10  * When CPU, memory and IO are contended, tasks experience delays that
    29  * In the SOME state of a given resource, one or more tasks are
    31  * perform work, but the CPU may still be executing other tasks.
    33  * In the FULL state of a given resource, all non-idle tasks are
    52  * The more tasks and available CPUs there are, the more work can be
    55  * tasks and CPUs.
    57  * Consider a scenario where 257 number crunching tasks are trying to
    65  * Conversely, consider a scenario of 4 tasks and 4 CPUs where at any
    66  * given time *one* of the tasks is delayed due to a lack of memory.
    73  * we have to base our calculation on the number of non-idle tasks in
    [all …]
|
/Linux-v5.10/include/linux/ |
D | psi_types.h |
    44  * SOME: Stalled tasks & working tasks
    45  * FULL: Stalled tasks & no working tasks
    70  /* States of the tasks belonging to this group */
    71  unsigned int tasks[NR_PSI_TASK_COUNTS]; member
    73  /* Aggregate pressure state derived from the tasks */
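psi_types.h distinguishes SOME (some tasks stalled while others still work) from FULL (no non-idle task is working). Userspace observes these states as the "some" and "full" lines of /proc/pressure/<resource>; a small sketch that just dumps the memory pressure file (a kernel built with CONFIG_PSI is assumed)::

  /* Sketch: print the pressure-stall lines for memory, e.g.
   *   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
   *   full avg10=0.00 avg60=0.00 avg300=0.00 total=0 */
  #include <stdio.h>

  int main(void)
  {
      char line[256];
      FILE *f = fopen("/proc/pressure/memory", "r");

      if (!f) {
          perror("fopen");
          return 1;
      }
      while (fgets(line, sizeof(line), f))
          fputs(line, stdout);
      fclose(f);
      return 0;
  }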
|
/Linux-v5.10/Documentation/locking/ |
D | futex-requeue-pi.rst |
    5  Requeueing of tasks from a non-PI futex to a PI futex requires
    17  pthread_cond_broadcast() must resort to waking all the tasks waiting
    47  Once pthread_cond_broadcast() requeues the tasks, the cond->mutex
    54  be able to requeue tasks to PI futexes. This support implies that
    113  possibly wake the waiting tasks. Internally, this system call is
    118  nr_wake+nr_requeue tasks to the PI futex, calling
    126  requeue up to nr_wake + nr_requeue tasks. It will wake only as many
    127  tasks as it can acquire the lock for, which in the majority of cases
|
/Linux-v5.10/tools/perf/Documentation/ |
D | perf-timechart.txt |
    48  --tasks-only::
    60  Print task info for at least given number of tasks.
    65  Highlight tasks (using different color) that run more than given
    66  duration or tasks with given name. If number is given it's interpreted
    89  --tasks-only::
    90  Record only tasks-related events
    114  then generate timechart and highlight 'gcc' tasks:
|
/Linux-v5.10/kernel/power/ |
D | process.c |
    75  * We need to retry, but first give the freezing tasks some in try_to_freeze_tasks()
    90  pr_err("Freezing of tasks %s after %d.%03d seconds " in try_to_freeze_tasks()
    91  "(%d tasks refusing to freeze, wq_busy=%d):\n", in try_to_freeze_tasks()
    151  * killable tasks. There is no guarantee oom victims will in freeze_processes()
    167  * (if any) before thawing the userspace tasks. So, it is the responsibility
    168  * of the caller to thaw the userspace tasks, when the time is right.
    174  pr_info("Freezing remaining freezable tasks ... "); in freeze_kernel_threads()
    202  pr_info("Restarting tasks ... "); in thaw_processes()
|
/Linux-v5.10/include/uapi/linux/ |
D | cgroupstats.h |
    33  __u64 nr_sleeping; /* Number of tasks sleeping */
    34  __u64 nr_running; /* Number of tasks running */
    35  __u64 nr_stopped; /* Number of tasks in stopped state */
    36  __u64 nr_uninterruptible; /* Number of tasks in uninterruptible */
    38  __u64 nr_io_wait; /* Number of tasks waiting on IO */
|
/Linux-v5.10/Documentation/x86/ |
D | resctrl_ui.rst |
    142  Indicator on Intel systems of how tasks running on threads
    188  after mounting, owns all the tasks and cpus in the system and can make
    198  directories can be created to monitor subsets of tasks in the CTRL_MON
    202  Removing a directory will move all tasks and cpus owned by the group it
    208  "tasks":
    209  Reading this file shows the list of all tasks that belong to
    221  CPUs to/from this group. As with the tasks file a hierarchy is
    263  all tasks in the group. In CTRL_MON groups these files provide
    264  the sum for all tasks in the CTRL_MON group and all tasks in
    682  Tasks that are under the control of group "p0" may only allocate from the
    [all …]
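Lines 208-209 above describe the "tasks" file of a resctrl group; per the resctrl interface, writing a PID into it moves that task into the group. A sketch, assuming resctrl is mounted at /sys/fs/resctrl and a group "p0" exists as in the line-682 example::

  /* Sketch: place the current process under resctrl group "p0". */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/sys/fs/resctrl/p0/tasks", O_WRONLY);

      if (fd < 0) {
          perror("open tasks");
          return 1;
      }
      dprintf(fd, "%d\n", (int)getpid());  /* move this task into the group */
      close(fd);
      return 0;
  }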
|
/Linux-v5.10/Documentation/admin-guide/kdump/ |
D | gdbmacros.txt |
    17  set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    20  set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
    51  set $next_t=(char *)($next_t->tasks.next) - $tasks_off
    83  set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    86  set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
    97  set $next_t=(char *)($next_t->tasks.next) - $tasks_off
    106  set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    109  set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
    127  set $next_t=(char *)($next_t->tasks.next) - $tasks_off
    139  set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
    [all …]
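The gdb macros above recompute the offset of task_struct::tasks and chase ->next by hand to visit every process in a crash dump. Inside the kernel the same traversal is the canonical task-list walk; a hedged sketch (kernel context, illustrative only)::

  /* Sketch: visit the same list the macros walk, i.e. every task_struct
   * linked through its ->tasks member off init_task. */
  #include <linux/printk.h>
  #include <linux/rcupdate.h>
  #include <linux/sched/signal.h>
  #include <linux/sched/task.h>

  static void dump_all_tasks(void)
  {
      struct task_struct *p;

      rcu_read_lock();        /* a dump needs no locking; a live kernel does */
      for_each_process(p)     /* follows init_task.tasks, like the macros */
          pr_info("pid %d comm %s\n", p->pid, p->comm);
      rcu_read_unlock();
  }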
|
/Linux-v5.10/Documentation/x86/x86_64/ |
D | fake-numa-for-cpusets.rst |
    14  assign them to cpusets and their attached tasks. This is a way of limiting the
    15  amount of system memory that are available to a certain class of tasks.
    56  You can now assign tasks to these cpusets to limit the memory resources
    59  [root@xroads /exampleset/ddset]# echo $$ > tasks
    75  This allows for coarse memory management for the tasks you assign to particular
    77  interesting combinations of use-cases for various classes of tasks for your
|
/Linux-v5.10/samples/bpf/ |
D | tracex2_user.c |
    86  static struct task tasks[1024]; in print_hist() local
    94  if (memcmp(&tasks[i], &next_key, SIZE) == 0) in print_hist()
    97  memcpy(&tasks[task_cnt++], &next_key, SIZE); in print_hist()
    103  (__u32) tasks[i].pid_tgid, in print_hist()
    104  tasks[i].comm, in print_hist()
    105  (__u32) tasks[i].uid_gid); in print_hist()
    106  print_hist_for_pid(fd, &tasks[i]); in print_hist()
|
/Linux-v5.10/tools/perf/bench/ |
D | futex.h |
    50  * futex_wake() - wake one or more tasks blocked on uaddr
    51  * @nr_wake: wake up to this many tasks
    78  * futex_cmp_requeue() - requeue tasks from uaddr to uaddr2
    79  * @nr_wake: wake up to this many tasks
    80  * @nr_requeue: requeue up to this many tasks
|