
Searched refs:tasks (Results 1 – 25 of 253) sorted by relevance


/Linux-v4.19/Documentation/cgroup-v1/
freezer-subsystem.txt
2 and stop sets of tasks in order to schedule the resources of a machine
5 whole. The cgroup freezer uses cgroups to describe the set of tasks to
7 a means to start and stop the tasks composing the job.
10 of tasks. The freezer allows the checkpoint code to obtain a consistent
11 image of the tasks by attempting to force the tasks in a cgroup into a
12 quiescent state. Once the tasks are quiescent another task can
14 quiesced tasks. Checkpointed tasks can be restarted later should a
15 recoverable error occur. This also allows the checkpointed tasks to be
17 to another node and restarting the tasks there.
20 and resuming tasks in userspace. Both of these signals are observable
[all …]
cpuacct.txt
4 The CPU accounting controller is used to group tasks using cgroups and
5 account the CPU usage of these groups of tasks.
8 group accumulates the CPU usage of all of its child groups and the tasks
16 visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
17 the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
19 by this group which is essentially the CPU time obtained by all the tasks
26 # echo $$ > g1/tasks
37 user: Time spent by tasks of the cgroup in user mode.
38 system: Time spent by tasks of the cgroup in kernel mode.
cgroups.txt
41 tasks, and all their future children, into hierarchical groups with
46 A *cgroup* associates a set of tasks with a set of parameters for one
50 facilities provided by cgroups to treat groups of tasks in
63 cgroups. Each hierarchy is a partition of all tasks in the system.
77 tasks in each cgroup.
96 the division of tasks into cgroups is distinctly different for
98 hierarchy to be a natural division of tasks, without having to handle
99 complex combinations of tasks that would be present if several
110 tasks etc. The resource planning for this server could be along the
119 In addition (system tasks) are attached to topcpuset (so
[all …]
hugetlb.txt
16 visible at /sys/fs/cgroup. At bootup, this group includes all the tasks in
17 the system. /sys/fs/cgroup/tasks lists the tasks in this cgroup.
23 # echo $$ > g1/tasks
memcg_test.txt
160 /bin/echo $pid >$2/tasks 2>/dev/null
167 G1_TASK=`cat ${G1}/tasks`
168 G2_TASK=`cat ${G2}/tasks`
216 # echo 0 > /cgroup/test/tasks
219 # move all tasks in /cgroup/test to /cgroup
227 Out-of-memory caused by memcg's limit will kill tasks under
230 In this case, panic_on_oom shouldn't be invoked and tasks
249 #echo $$ >/cgroup/A/tasks
255 #echo "pid of the program running in group A" >/cgroup/B/tasks
271 # echo $$ >/cgroup/A/tasks
cpusets.txt
41 Nodes to a set of tasks. In this document "Memory Node" refers to
44 Cpusets constrain the CPU and Memory placement of tasks to only
79 the available CPU and Memory resources amongst the requesting tasks.
136 - You can list all the tasks (by pid) attached to any cpuset.
145 - in sched.c migrate_live_tasks(), to keep migrating tasks within
181 - cpuset.sched_relax_domain_level: the searching range when migrating tasks
188 CPUs and Memory Nodes, and attached tasks, are modified by writing
196 on a system into related sets of tasks such that each set is constrained
202 the detailed placement done on individual tasks and memory regions
249 of the rate that the tasks in a cpuset are attempting to free up in
[all …]
/Linux-v4.19/samples/bpf/
tracex2_user.c
84 static struct task tasks[1024]; in print_hist() local
92 if (memcmp(&tasks[i], &next_key, SIZE) == 0) in print_hist()
95 memcpy(&tasks[task_cnt++], &next_key, SIZE); in print_hist()
101 (__u32) tasks[i].pid_tgid, in print_hist()
102 tasks[i].comm, in print_hist()
103 (__u32) tasks[i].uid_gid); in print_hist()
104 print_hist_for_pid(fd, &tasks[i]); in print_hist()
map_perf_test_user.c
93 static int pre_test_lru_hash_lookup(int tasks) in pre_test_lru_hash_lookup() argument
291 typedef int (*pre_test_func)(int tasks);
311 static int pre_test(int tasks) in pre_test() argument
317 int ret = pre_test_funcs[i](tasks); in pre_test()
342 static void run_perf_test(int tasks) in run_perf_test() argument
344 pid_t pid[tasks]; in run_perf_test()
347 assert(!pre_test(tasks)); in run_perf_test()
349 for (i = 0; i < tasks; i++) { in run_perf_test()
359 for (i = 0; i < tasks; i++) { in run_perf_test()
test_overhead_user.c
98 static void run_perf_test(int tasks, int flags) in run_perf_test() argument
100 pid_t pid[tasks]; in run_perf_test()
103 for (i = 0; i < tasks; i++) { in run_perf_test()
113 for (i = 0; i < tasks; i++) { in run_perf_test()
/Linux-v4.19/Documentation/scheduler/
sched-design-CFS.txt
18 1/nr_running speed. For example: if there are 2 tasks running, then it runs
25 is its actual runtime normalized to the total number of running tasks.
35 [ small detail: on "ideal" hardware, at any time all tasks would have the same
36 p->se.vruntime value --- i.e., tasks would execute simultaneously and no task
42 up CPU time between runnable tasks as close to "ideal multitasking hardware" as
59 increasing value tracking the smallest vruntime among all tasks in the
64 The total number of running tasks in the runqueue is accounted through the
65 rq->cfs.load value, which is the sum of the weights of the tasks queued on the
68 CFS maintains a time-ordered rbtree, where all runnable tasks are sorted by the
70 As the system progresses forwards, the executed tasks are put into the tree
[all …]
sched-deadline.txt
43 that makes it possible to isolate the behavior of tasks between each other.
53 "deadline", to schedule tasks. A SCHED_DEADLINE task should receive
65 Summing up, the CBS[2,3] algorithm assigns scheduling deadlines to tasks so
67 interference between different tasks (bandwidth isolation), while the EDF[1]
69 to be executed next. Thanks to this feature, tasks that do not strictly comply
74 tasks in the following way:
128 Bandwidth reclaiming for deadline tasks is based on the GRUB (Greedy
132 The following diagram illustrates the state names for tasks handled by GRUB:
201 tasks in active state (i.e., ActiveContending or ActiveNonContending);
203 - Total bandwidth (this_bw): this is the sum of all tasks "belonging" to the
[all …]
sched-rt-group.txt
14 2.3 Basis for grouping tasks
44 multiple groups of realtime tasks, each group must be assigned a fixed portion
57 tasks (SCHED_OTHER). Any allocated run time not used will also be picked up by
72 The remaining CPU time will be used for user input and other tasks. Because
73 realtime tasks have explicitly allocated the CPU time they need to perform
74 their tasks, buffer underruns in the graphics or audio can be eliminated.
110 SCHED_OTHER (non-RT tasks). These defaults were chosen so that a run-away
111 realtime tasks will not lock up the machine but leave a little time to recover
120 bandwidth to the group before it will accept realtime tasks. Therefore you will
121 not be able to run realtime tasks as any user other than root until you have
[all …]
/Linux-v4.19/drivers/gpu/drm/
drm_flip_work.c
114 struct list_head tasks; in flip_worker() local
120 INIT_LIST_HEAD(&tasks); in flip_worker()
122 list_splice_tail(&work->commited, &tasks); in flip_worker()
126 if (list_empty(&tasks)) in flip_worker()
129 list_for_each_entry_safe(task, tmp, &tasks, node) { in flip_worker()
/Linux-v4.19/Documentation/power/
freezing-of-tasks.txt
1 Freezing of tasks
4 I. What is the freezing of tasks?
6 The freezing of tasks is a mechanism by which user space processes and some
13 and PF_FREEZER_SKIP (the last one is auxiliary). The tasks that have
25 All freezable tasks must react to that by calling try_to_freeze(), which
57 initiated a freezing operation, the freezing of tasks will fail and the entire
64 order to clear the PF_FROZEN flag for each frozen task. Then, the tasks that
68 Rationale behind the functions dealing with freezing and thawing of tasks:
72 - freezes only userspace tasks
75 - freezes all tasks (including kernel threads) because we can't freeze
[all …]
/Linux-v4.19/drivers/isdn/hardware/eicon/
os_4bri.c
155 int tasks = _4bri_is_rev_2_bri_card(a->CardOrdinal) ? 1 : MQ_INSTANCE_COUNT; in diva_4bri_init_card() local
156 int factor = (tasks == 1) ? 1 : 2; in diva_4bri_init_card()
170 bar_length[2], tasks, factor)) in diva_4bri_init_card()
261 if (tasks > 1) { in diva_4bri_init_card()
302 for (i = 0; i < (tasks - 1); i++) { in diva_4bri_init_card()
315 for (i = 0; i < tasks; i++) { in diva_4bri_init_card()
317 adapter_list[i]->xdi_adapter.tasks = tasks; in diva_4bri_init_card()
322 for (i = 0; i < tasks; i++) { in diva_4bri_init_card()
347 for (i = 1; i < (tasks - 1); i++) { in diva_4bri_init_card()
358 for (i = 1; i < (tasks - 1); i++) { in diva_4bri_init_card()
[all …]
s_4bri.c
52 int factor = (IoAdapter->tasks == 1) ? 1 : 2; in qBri_cpu_trapped()
394 for (i = 0; i < IoAdapter->tasks; ++i) in qBri_ISR()
468 if (!IoAdapter->tasks) { in set_qBri_functions()
469 IoAdapter->tasks = MQ_INSTANCE_COUNT; in set_qBri_functions()
477 if (!IoAdapter->tasks) { in set_qBri2_functions()
478 IoAdapter->tasks = MQ_INSTANCE_COUNT; in set_qBri2_functions()
480 IoAdapter->MemorySize = (IoAdapter->tasks == 1) ? BRI2_MEMORY_SIZE : MQ2_MEMORY_SIZE; in set_qBri2_functions()
497 if (!IoAdapter->tasks) { in prepare_qBri2_functions()
498 IoAdapter->tasks = MQ_INSTANCE_COUNT; in prepare_qBri2_functions()
502 if (IoAdapter->tasks > 1) { in prepare_qBri2_functions()
/Linux-v4.19/tools/perf/scripts/python/
sched-migration.py
102 def __init__(self, tasks = [0], event = RunqueueEventUnknown()): argument
103 self.tasks = tuple(tasks)
109 if taskState(prev_state) == "R" and next in self.tasks \
110 and prev in self.tasks:
116 next_tasks = list(self.tasks[:])
117 if prev in self.tasks:
129 if old not in self.tasks:
131 next_tasks = [task for task in self.tasks if task != old]
136 if new in self.tasks:
139 next_tasks = self.tasks[:] + tuple([new])
[all …]
/Linux-v4.19/Documentation/kdump/
gdbmacros.txt
17 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
20 set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
51 set $next_t=(char *)($next_t->tasks.next) - $tasks_off
83 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
86 set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
97 set $next_t=(char *)($next_t->tasks.next) - $tasks_off
106 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
109 set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
127 set $next_t=(char *)($next_t->tasks.next) - $tasks_off
139 set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
[all …]
/Linux-v4.19/tools/perf/Documentation/
perf-timechart.txt
48 --tasks-only::
60 Print task info for at least given number of tasks.
65 Highlight tasks (using different color) that run more than given
66 duration or tasks with given name. If number is given it's interpreted
89 --tasks-only::
90 Record only tasks-related events
114 then generate timechart and highlight 'gcc' tasks:
/Linux-v4.19/net/sunrpc/
sched.c
104 struct list_head *q = &queue->tasks[queue->priority]; in rpc_rotate_queue_owner()
150 q = &queue->tasks[queue_priority]; in __rpc_add_wait_queue_priority()
179 list_add(&task->u.tk_wait.list, &queue->tasks[0]); in __rpc_add_wait_queue()
181 list_add_tail(&task->u.tk_wait.list, &queue->tasks[0]); in __rpc_add_wait_queue()
226 for (i = 0; i < ARRAY_SIZE(queue->tasks); i++) in __rpc_init_priority_wait_queue()
227 INIT_LIST_HEAD(&queue->tasks[i]); in __rpc_init_priority_wait_queue()
495 q = &queue->tasks[queue->priority]; in __rpc_find_next_queued_priority()
513 if (q == &queue->tasks[0]) in __rpc_find_next_queued_priority()
514 q = &queue->tasks[queue->maxpriority]; in __rpc_find_next_queued_priority()
521 } while (q != &queue->tasks[queue->priority]); in __rpc_find_next_queued_priority()
[all …]
/Linux-v4.19/Documentation/x86/x86_64/
fake-numa-for-cpusets
7 assign them to cpusets and their attached tasks. This is a way of limiting the
8 amount of system memory that are available to a certain class of tasks.
49 You can now assign tasks to these cpusets to limit the memory resources
52 [root@xroads /exampleset/ddset]# echo $$ > tasks
64 This allows for coarse memory management for the tasks you assign to particular
66 interesting combinations of use-cases for various classes of tasks for your
/Linux-v4.19/Documentation/
futex-requeue-pi.txt
5 Requeueing of tasks from a non-PI futex to a PI futex requires
17 pthread_cond_broadcast() must resort to waking all the tasks waiting
47 Once pthread_cond_broadcast() requeues the tasks, the cond->mutex
54 be able to requeue tasks to PI futexes. This support implies that
113 possibly wake the waiting tasks. Internally, this system call is
118 nr_wake+nr_requeue tasks to the PI futex, calling
126 requeue up to nr_wake + nr_requeue tasks. It will wake only as many
127 tasks as it can acquire the lock for, which in the majority of cases
/Linux-v4.19/Documentation/livepatch/
livepatch.txt
98 transition state where tasks are converging to the patched state.
100 sequence occurs when a patch is disabled, except the tasks converge from
104 interrupts. The same is true for forked tasks: the child inherits the
108 safe to patch tasks:
111 tasks. If no affected functions are on the stack of a given task,
113 the tasks on the first try. Otherwise it'll keep trying
121 a) Patching I/O-bound user tasks which are sleeping on an affected
124 b) Patching CPU-bound user tasks. If the task is highly CPU-bound
128 3. For idle "swapper" tasks, since they don't ever exit the kernel, they
135 the second approach. It's highly likely that some tasks may still be
[all …]
/Linux-v4.19/Documentation/RCU/
stallwarn.txt
90 The RCU, RCU-sched, RCU-bh, and RCU-tasks implementations have CPU stall
156 This boot/sysfs parameter controls the RCU-tasks stall warning
157 interval. A value of zero or less suppresses RCU-tasks stall
159 in jiffies. An RCU-tasks stall warning starts with the line:
161 INFO: rcu_tasks detected stalls on tasks:
164 task stalling the current RCU-tasks grace period.
169 For non-RCU-tasks flavors of RCU, when a CPU detects that it is stalling,
172 INFO: rcu_sched detected stalls on CPUs/tasks:
180 PREEMPT_RCU builds can be stalled by tasks as well as by CPUs, and that
181 the tasks will be indicated by PID, for example, "P3421". It is even
[all …]
/Linux-v4.19/Documentation/namespaces/
compatibility-list.txt
4 may have when creating tasks living in different namespaces.
7 occur when tasks share some namespace (the columns) while living
22 In both cases, tasks shouldn't try exposing this ID to some
