Lines Matching +full:s +full:- +full:mode
12 supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
18 (``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
21 programming interface that a NUMA-aware application can take advantage of. When
30 ------------------------
43 not to overload the initial boot node with boot-time
47 this is an optional, per-task policy. When defined for a
63 In a multi-threaded task, task policies apply only to the thread
77 A "VMA" or "Virtual Memory Area" refers to a range of a task's
85 this region of the address space. Any regions of the task's
100 mapping--i.e., at Copy-On-Write.
103 virtual address space--a.k.a. threads--independent of when
106 region of a task's address space, and because the address
108 are NOT inheritable across exec(). Thus, only NUMA-aware
111 * A task may install a new VMA policy on a sub-range of a
114 its own policy.
128 policies--using the mbind() system call specifying a range of
131 range of a task's address space, shared policies apply
140 support allocation at fault time--a.k.a. lazy allocation--so hugetlbfs
163 -----------------------------
165 A NUMA memory policy consists of a "mode", optional mode flags, and
166 an optional set of nodes. The mode determines the behavior of the
167 policy, the optional mode flags determine the behavior of the mode,
177 Default Mode--MPOL_DEFAULT
178 This mode is only used in the memory policy APIs. Internally,
180 policy scopes. Any existing non-default policy will simply be
189 When specified in one of the memory policy APIs, the Default mode
193 be non-empty.
196 This mode specifies that memory must come from the set of
202 This mode specifies that the allocation should be attempted
208 Internally, the Preferred policy uses a single node--the
210 mode flag MPOL_F_LOCAL is set, the preferred_node is ignored
218 mode. If an empty nodemask is passed, the policy cannot use
223 This mode specifies that page allocations be interleaved, on a
225 This mode also behaves slightly differently, based on the
229 Interleave mode indexes the set of nodes specified by the
238 For allocation of page cache pages, Interleave mode indexes
246 interleaved system default policy works in this mode.
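
As an illustrative worked example (assuming the anonymous-page case, where
the index is the page offset into the mapping modulo the number of nodes in
the mask): with a nodemask of 1,3,5 and a fault at page offset 7, 7 mod 3 = 1
selects the second node in the mask, so the page is allocated on node 3.
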
249 This mode specifies that the allocation should be preferably
255 NUMA memory policy supports the following optional mode flags:
259 the user should not be remapped if the task or VMA's set of allowed
268 With this flag, if the user-specified nodes overlap with the
269 nodes allowed by the task's cpuset, then the memory policy is
274 mems 1-3 that sets an Interleave policy over the same set. If
275 the cpuset's mems change to 3-5, the Interleave will now occur
277 3 is allowed from the user's nodemask, the "interleave" only
278 occurs over that node. If no nodes from the user's nodemask are
288 by the user will be mapped relative to the task or VMA's
289 set of allowed nodes. The kernel stores the user-passed nodemask,
297 preserve the relative nature of the user's passed nodemask to its
299 1,3,5 may be remapped to 7-9 and then to 1-3 if the set of
303 the user's passed nodemask are relative to the set of allowed
304 nodes. In other words, if nodes 0, 2, and 4 are set in the user's
308 relative to the task or VMA's set of allowed nodes.
310 If the user's nodemask includes nodes that are outside the range
312 the user's nodemask when the set of allowed nodes is only 0-3),
317 mems 2-5 that sets an Interleave policy over the same set with
318 MPOL_F_RELATIVE_NODES. If the cpuset's mems change to 3-7, the
319 interleave now occurs over nodes 3,5-7. If the cpuset's mems
320 then change to 0,2-3,5, then the interleave occurs over nodes
321 0,2-3,5.
327 memory nodes 0 to N-1, where N is the number of memory nodes the
329 set of memory nodes allowed by the task's cpuset, as that may
349 structure, another reference is added, as the task's reference will be dropped
352 During run-time "usage" of the policy, we attempt to minimize atomic operations
360 2) examination of the policy to determine the policy mode and associated node
373 target task's task policy nor vma policies because we always acquire the
374 task's mm's mmap_lock for read during the query. The set_mempolicy() and
393 used for non-shared policies. For this reason, shared policies are marked
394 as such, and the extra reference is dropped "conditionally"--i.e., only
412 always affect only the calling task, the calling task's address space, or
413 some shared object mapped into the calling task's address space.
419 prefix, are defined in <linux/syscalls.h>; the mode and flag
424 long set_mempolicy(int mode, const unsigned long *nmask,
427 Sets the calling task's "task/process memory policy" to mode
428 specified by the 'mode' argument and the set of nodes defined by
430 'maxnode' ids. Optional mode flags may be passed by combining the
431 'mode' argument with the flag (for example: MPOL_INTERLEAVE |
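
A minimal user-space sketch of this call, assuming the set_mempolicy()
wrapper and MPOL_* definitions from libnuma's <numaif.h> (link with -lnuma)
and a machine that actually has nodes 0 and 1::

    #include <numaif.h>     /* set_mempolicy(), MPOL_* (libnuma) */
    #include <stdio.h>

    int main(void)
    {
        /* Interleave new allocations across nodes 0 and 1. */
        unsigned long nodemask = (1UL << 0) | (1UL << 1);

        if (set_mempolicy(MPOL_INTERLEAVE, &nodemask,
                          8 * sizeof(nodemask)))
            perror("set_mempolicy");
        return 0;
    }
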
439 long get_mempolicy(int *mode,
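
In the same hedged spirit, querying the calling task's current policy mode
might look like this (again assuming <numaif.h> from libnuma)::

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
        int mode;

        /* With a zero flags argument and no address, this reports the
         * calling task's task/process policy mode. */
        if (get_mempolicy(&mode, NULL, 0, NULL, 0))
            perror("get_mempolicy");
        else
            printf("task policy mode: %d\n", mode);
        return 0;
    }
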
450 Install VMA/Shared Policy for a Range of Task's Address Space::
452 long mbind(void *start, unsigned long len, int mode,
456 mbind() installs the policy specified by (mode, nmask, maxnodes) as a
457 VMA policy for the range of the calling task's address space specified
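
An illustrative sketch of installing a VMA policy on an anonymous mapping
before any pages are touched; it assumes <numaif.h> from libnuma and uses
node 0 purely as an example::

    #include <numaif.h>
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 16 * 4096;
        unsigned long nodemask = 1UL << 0;   /* bind the range to node 0 */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        /* Pages are allocated lazily, so the policy installed here
         * governs where they land when first touched. */
        if (mbind(buf, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask), 0))
            perror("mbind");
        return 0;
    }
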
463 Set home node for a Range of Task's Address Space::
470 task's address range. The system call updates the home node only for the existing
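
There is no common C library wrapper for this system call, so a sketch would
go through syscall(2). This assumes kernel headers recent enough to define
__NR_set_mempolicy_home_node and a system where node 1 exists; the range is
given a MPOL_BIND policy first, since the call only updates the home node of
an existing mempolicy::

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <sys/mman.h>
    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
        size_t len = 16 * 4096;
        unsigned long nodemask = (1UL << 0) | (1UL << 1);
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;
        /* Install a policy on the range first ... */
        if (mbind(buf, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask), 0))
            perror("mbind");
        /* ... then prefer node 1 as the home node for that range. */
        if (syscall(__NR_set_mempolicy_home_node,
                    (unsigned long)buf, len, 1UL, 0UL))
            perror("set_mempolicy_home_node");
        return 0;
    }
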
488 The numactl(8) tool is packaged with the run-time version of the library
490 package the headers and compile-time libraries in a separate development
505 installed. If MPOL_F_RELATIVE_NODES is used, the policy's nodes are mapped
506 onto and folded into the task's set of allowed nodes as previously described.
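
As a hedged illustration of the compile-time library interface, a short
libnuma snippet (link with -lnuma; the 1 MiB size is arbitrary)::

    #include <numa.h>       /* libnuma */
    #include <stdio.h>

    int main(void)
    {
        void *buf;

        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not available here\n");
            return 1;
        }
        /* Allocate 1 MiB interleaved across all allowed nodes. */
        buf = numa_alloc_interleaved(1 << 20);
        if (!buf)
            return 1;
        numa_free(buf, 1 << 20);
        return 0;
    }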