
===================
NUMA Memory Policy
===================

What is NUMA Memory Policy?
===========================

In the Linux kernel, "memory policy" determines from which node the kernel will
allocate memory in a NUMA system or in an emulated NUMA system.  Linux has
supported platforms with Non-Uniform Memory Access architectures since 2.4.?.
The current memory policy support was added to Linux 2.6 around May 2004.  This
document attempts to describe the concepts and APIs of the 2.6 memory policy
support.

Memory policies should not be confused with cpusets
(``Documentation/admin-guide/cgroup-v1/cpusets.rst``), which is an
administrative mechanism for restricting the nodes from which memory may be
allocated by a set of processes.  Memory policies are a programming interface
that a NUMA-aware application can take advantage of.  When both cpusets and
policies are applied to a task, the restrictions of the cpuset take priority.
See :ref:`Memory Policies and cpusets <mem_pol_and_cpusets>` below for more
details.

Memory Policy Concepts
======================

Scope of Memory Policies
------------------------

The Linux kernel supports *scopes* of memory policy, described here from
most general to most specific:

System Default Policy
    this policy is "hard coded" into the kernel.  It governs all page
    allocations that aren't controlled by one of the more specific
    policy scopes discussed below.  When the system is "up and
    running", the system default policy uses "local allocation".
    During boot up, however, the system default policy is set to
    interleave allocations across all nodes with "sufficient" memory,
    so as not to overload the initial boot node with boot-time
    allocations.

Task/Process Policy
    this is an optional, per-task policy.  When defined for a specific
    task, this policy controls all page allocations made by or on
    behalf of the task that aren't controlled by a more specific
    scope.  The task policy is inherited across fork() and exec*(),
    which allows a parent task to establish the policy for a child
    task exec()'d from an executable image that has no awareness of
    memory policy.  See the
    :ref:`Memory Policy APIs <memory_policy_apis>` section, below, for
    an overview of the system call used to set or change the
    task/process policy.

    In a multi-threaded task, task policies apply only to the thread
    [Linux kernel task] that installs the policy and to any threads
    subsequently created by that thread.

VMA Policy
    A "VMA" or "Virtual Memory Area" refers to a range of a task's
    virtual address space.  A task may define a specific policy for a
    range of its virtual address space.  See the
    :ref:`Memory Policy APIs <memory_policy_apis>` section, below, for
    an overview of the mbind() system call used to set a VMA policy.

    VMA policies have a few complicating details:

    * VMA policy applies ONLY to anonymous pages.  If a VMA policy is
      applied to a MAP_PRIVATE file mapping, it takes effect only when
      an anonymous page is allocated on an attempt to write to the
      mapping--i.e., at Copy-On-Write.

    * VMA policies are shared between all tasks that share a
      virtual address space--a.k.a. threads--independent of when
      the policy is installed, and they are inherited across fork().
      However, because the address space is discarded and recreated on
      exec*(), VMA policies are NOT inheritable across exec().  Thus,
      only NUMA-aware applications may use VMA policies.

    * A task may install a new VMA policy on a sub-range of a
      previously mmap()ed region.  When this happens, Linux splits
      the existing virtual memory area into 2 or 3 VMAs, each with
      its own policy.

Shared Policy
    Conceptually, shared policies apply to "memory objects" mapped
    shared into one or more tasks' distinct address spaces.  An
    application installs shared policies the same way as VMA
    policies--using the mbind() system call specifying a range of
    virtual addresses that map the shared object.  However, unlike VMA
    policies, which can be considered an attribute of a range of a
    task's address space, shared policies apply directly to the shared
    object.  Thus, all tasks that attach to the object share the
    policy, and all pages allocated for the shared object, by any
    task, will obey the shared policy.

    As of 2.6.22, only shared memory segments, created by shmget() or
    mmap(MAP_ANONYMOUS|MAP_SHARED), support shared policy.  When shared
    policy support was added to Linux, the associated data structures
    were added to hugetlbfs shmem segments.  At the time, hugetlbfs did
    not support allocation at fault time--a.k.a. lazy allocation--so
    hugetlbfs shmem segments were never "hooked up" to the shared
    policy support.  Although hugetlbfs segments now support lazy
    allocation, their support for shared policy has not been completed.

    Allocations of page cache pages for regular files mmap()ed with
    MAP_SHARED ignore any VMA policy installed on the virtual address
    range backed by the shared file mapping.  Rather, shared page
    cache pages, including pages backing private mappings that have
    not yet been written by the task, follow task policy, if any, else
    the System Default Policy.

    The shared policy infrastructure supports different policies on
    subset ranges of the shared object.  However, Linux still splits
    the VMA of the task that installs the policy for each range of
    distinct policy.  Thus, different tasks that attach to a shared
    memory segment can have different VMA configurations mapping that
    one shared object.  This can be seen by examining the
    /proc/<pid>/numa_maps of tasks sharing a shared memory region,
    when one task has installed shared policy on the region while
    another task has not.

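As a concrete illustration, the following userspace sketch installs an
interleave policy on a System V shared memory segment.  It is not taken from
the kernel tree; it assumes libnuma's ``<numaif.h>`` declaration of mbind()
(link with ``-lnuma``), that nodes 0 and 1 exist, and it keeps error handling
minimal::

    /* Hedged example: give a shmget() segment a shared interleave
     * policy.  Any task that attaches the segment will obey it. */
    #include <numaif.h>          /* mbind(), MPOL_* (libnuma) */
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        size_t len = 64 * page;
        unsigned long nodemask = (1UL << 0) | (1UL << 1);   /* nodes 0-1 */

        int shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
        if (shmid < 0) {
            perror("shmget");
            return 1;
        }

        char *seg = shmat(shmid, NULL, 0);
        if (seg == (void *)-1) {
            perror("shmat");
            return 1;
        }

        /* The policy attaches to the shared object itself: other tasks
         * that attach this segment allocate pages in this range
         * according to the interleave policy, not their own policies. */
        if (mbind(seg, len, MPOL_INTERLEAVE, &nodemask,
                  sizeof(nodemask) * 8, 0) != 0)
            perror("mbind");

        seg[0] = 1;              /* first touch obeys the shared policy */

        shmdt(seg);
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }
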
Components of Memory Policies
-----------------------------

A NUMA memory policy consists of a "mode", optional mode flags, and
an optional set of nodes.  The mode determines the behavior of the
policy, the optional mode flags determine the behavior of the mode,
and the optional set of nodes can be viewed as the arguments to the
policy behavior.

Internally, memory policies are implemented by a reference counted
structure, struct mempolicy.  Details of this structure are discussed
in context, below, as required to explain the behavior.

NUMA memory policy supports the following behavioral modes:

Default Mode--MPOL_DEFAULT
    This mode is only used in the memory policy APIs.  Internally,
    MPOL_DEFAULT is converted to the NULL memory policy in all
    policy scopes.  Any existing non-default policy will simply be
    removed when MPOL_DEFAULT is specified.  As a result, MPOL_DEFAULT
    means "fall back to the next most specific policy scope."

    When specified in one of the memory policy APIs, the Default mode
    does not use the optional set of nodes.

    It is an error for the set of nodes specified for this policy to
    be non-empty.

MPOL_BIND
    This mode specifies that memory must come from the set of
    nodes specified by the policy.  Memory will be allocated from
    the node in the set with sufficient free memory that is
    closest to the node where the allocation takes place.

MPOL_PREFERRED
    This mode specifies that the allocation should be attempted from
    the single node specified in the policy.  If that allocation
    fails, the kernel will search other nodes in order of increasing
    distance from the preferred node.

    Internally, the Preferred policy uses a single node--the
    preferred_node member of struct mempolicy.

MPOL_INTERLEAVED
    This mode specifies that page allocations be interleaved, on a
    page granularity, across the nodes specified in the policy.

    For allocation of anonymous pages and shared memory pages,
    Interleave mode indexes the set of nodes specified by the policy
    using the page offset of the faulting address into the segment,
    modulo the number of nodes in the policy (see the sketch following
    this list).

MPOL_PREFERRED_MANY
    This mode specifies that the allocation should preferably be
    satisfied from the nodemask specified in the policy.  If there is
    a memory pressure on all nodes in the nodemask, the allocation
    can fall back to all existing NUMA nodes.  This is effectively
    MPOL_PREFERRED allowed for a mask rather than a single node.

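The page-offset indexing used by Interleave mode for anonymous and shared
memory pages can be pictured with a small, self-contained sketch.  This is
not the kernel's code (that lives in mm/mempolicy.c); it ignores rebinds,
mode flags and huge pages, and simply maps a faulting address to one of the
policy's nodes in round-robin fashion::

    #include <stdio.h>
    #include <stddef.h>

    /* 'nodes' lists the node IDs in the policy nodemask, in order. */
    static int interleave_node(unsigned long fault_addr,
                               unsigned long vma_start,
                               unsigned long page_size,
                               const int *nodes, size_t nr_nodes)
    {
        unsigned long page_off = (fault_addr - vma_start) / page_size;

        return nodes[page_off % nr_nodes];   /* round-robin by page */
    }

    int main(void)
    {
        const int nodes[] = { 0, 1, 3 };     /* policy over nodes 0,1,3 */
        unsigned long base = 0x700000000000UL;

        for (unsigned long pg = 0; pg < 6; pg++)
            printf("page %lu -> node %d\n", pg,
                   interleave_node(base + pg * 4096, base, 4096,
                                   nodes, 3));
        return 0;
    }
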
NUMA memory policy supports the following optional mode flags:

MPOL_F_STATIC_NODES
    This flag specifies that the nodemask passed by the user should
    not be remapped if the task's or VMA's set of allowed nodes
    changes after the memory policy has been defined.

    With this flag, if the user-specified nodes overlap with the
    nodes allowed by the task's cpuset, then the memory policy is
    applied to their intersection.  If the two sets of nodes do not
    overlap, the Default policy is used.

    For example, consider a task that is attached to a cpuset with
    mems 1-3 that sets an Interleave policy over the same set.  If
    the cpuset's mems change to 3-5, the Interleave will now occur
    over nodes 3, 4, and 5.  With this flag, however, since only
    node 3 is allowed from the user's nodemask, the "interleave"
    only occurs over that node.

MPOL_F_RELATIVE_NODES
    This flag specifies that the nodemask passed by the user will be
    mapped relative to the task's or VMA's set of allowed nodes.  The
    kernel stores the user-passed nodemask, and if the set of allowed
    nodes changes, that original nodemask is remapped relative to the
    new set of allowed nodes.

    Without this flag (and without MPOL_F_STATIC_NODES), the remap
    performed when the set of allowed nodes changes may not preserve
    the relative nature of the user's nodemask across successive
    rebinds: a nodemask of 1,3,5 may be remapped to 7-9 and then to
    1-3 if the set of allowed nodes is restored to its original state.

    If the user's nodemask includes nodes that are outside the range
    of the new set of allowed nodes (for example, node 5 is set in
    the user's nodemask when the set of allowed nodes is only 0-3),
    then the remap wraps around to the beginning of the nodemask and,
    if not already set, sets the node in the mempolicy nodemask.

    For example, consider a task that is attached to a cpuset with
    mems 2-5 that sets an Interleave policy over the same set with
    MPOL_F_RELATIVE_NODES.  If the cpuset's mems change to 3-7, the
    interleave now occurs over nodes 3,5-7.  If the cpuset's mems
    then change to 0,2-3,5, then the interleave occurs over nodes
    0,2-3,5.  (The sketch following this list reproduces this folding
    in userspace.)

    Thanks to the consistent remapping, applications preparing
    nodemasks to specify memory policies using this flag should
    disregard their current, actual cpuset imposed memory placement
    and prepare the nodemask as if they were always located on
    memory nodes 0 to N-1, where N is the number of memory nodes the
    policy is intended to manage.  Let the kernel then remap to the
    set of memory nodes allowed by the task's cpuset, as that may
    change over time.

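The wrap-around folding described for MPOL_F_RELATIVE_NODES can be
reproduced in userspace.  The sketch below is only an illustration of the
rule stated above (bit i of the user's nodemask selects the (i mod N)-th
allowed node, where N is the number of allowed nodes); the kernel's own
implementation lives in mm/mempolicy.c.  It prints the two results from the
example: allowed mems 3-7 give nodes 3,5-7, and allowed mems 0,2-3,5 give
nodes 0,2-3,5::

    #include <stdio.h>

    static unsigned long remap_relative(unsigned long user,
                                        unsigned long allowed)
    {
        int allowed_nodes[64], n = 0;
        unsigned long result = 0;

        /* Collect the allowed node IDs in ascending order. */
        for (int node = 0; node < 64; node++)
            if (allowed & (1UL << node))
                allowed_nodes[n++] = node;
        if (n == 0)
            return 0;

        /* Each relative node number folds onto the allowed set,
         * wrapping around at the end. */
        for (int bit = 0; bit < 64; bit++)
            if (user & (1UL << bit))
                result |= 1UL << allowed_nodes[bit % n];
        return result;
    }

    int main(void)
    {
        unsigned long user = 0x3cUL;                  /* nodes 2-5 */

        printf("allowed 3-7     -> 0x%lx\n",
               remap_relative(user, 0xf8UL));         /* 0xe8: nodes 3,5-7 */
        printf("allowed 0,2-3,5 -> 0x%lx\n",
               remap_relative(user, 0x2dUL));         /* 0x2d: nodes 0,2-3,5 */
        return 0;
    }
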
Memory Policy Reference Counting
================================

When a new memory policy is allocated, its reference count is initialized
to '1', representing the reference held by the task that is installing the
new policy.  When a pointer to a memory policy structure is stored in another
structure, another reference is added, as the task's reference will be dropped
on completion of the policy installation.

During run-time "usage" of the policy, we attempt to minimize atomic operations
on the reference count, as this can lead to cache lines bouncing between CPUs
and NUMA nodes.

Shared policies require special consideration.  One task can replace a
shared memory policy while another task, with a distinct mmap_lock, is
querying or allocating a page based on that policy.  To resolve this
potential race, the shared policy infrastructure adds an extra reference
to the shared policy during lookup while holding a spin lock on the shared
policy management structure.  This requires that we drop the extra
reference when we're finished "using" the policy, and that we drop the
extra reference on shared policies in the same query/allocation paths
used for non-shared policies.  For this reason, shared policies are marked
as such, and the extra reference is dropped "conditionally"--i.e., only
for shared policies.

Because of this extra reference counting, and because we must look up
shared policies in a tree structure under spinlock, shared policies are
more expensive to use in the page allocation path.  This is especially
true for shared policies on shared memory regions shared by tasks running
on different NUMA nodes.  This extra overhead can be avoided by always
falling back to task or system default policy for shared memory regions,
or by prefaulting the entire shared memory region into memory and locking
it down.

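The prefault-and-lock mitigation mentioned above might look like the hedged
sketch below.  It again assumes libnuma's ``<numaif.h>``, nodes 0 and 1, and
a sufficient RLIMIT_MEMLOCK; the point is only that after the prefault loop
and mlock(), later accesses no longer go through the shared-policy
allocation path::

    #include <numaif.h>
    #include <sys/mman.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        size_t len = 64 * page;
        unsigned long nodemask = (1UL << 0) | (1UL << 1);

        char *seg = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (seg == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Install the shared policy first, then pay the allocation
         * cost once, up front. */
        if (mbind(seg, len, MPOL_INTERLEAVE, &nodemask,
                  sizeof(nodemask) * 8, 0) != 0)
            perror("mbind");

        for (size_t off = 0; off < len; off += page)    /* prefault */
            seg[off] = 0;
        if (mlock(seg, len) != 0)                       /* lock it down */
            perror("mlock");

        /* ... use the region ... */
        return 0;
    }
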
.. _memory_policy_apis:

Memory Policy APIs
==================

Linux supports several system calls for controlling memory policy.  These APIs
always affect only the calling task, the calling task's address space, or
some shared object mapped into the calling task's address space.

Set [Task] Memory Policy::

    long set_mempolicy(int mode, const unsigned long *nmask,
                       unsigned long maxnode);

Sets the calling task's "task/process memory policy" to the mode
specified by the 'mode' argument and the set of nodes defined by
'nmask'.  Optional mode flags may be passed by OR'ing them into the
'mode' argument.  See the set_mempolicy(2) man page for more details.

Get [Task] Memory Policy or Related Information::

    long get_mempolicy(int *mode,
                       unsigned long *nmask, unsigned long maxnode,
                       void *addr, int flags);

Queries the "task/process memory policy" of the calling task, or the
policy or location of a specified virtual address, depending on the
'flags' argument.  See the get_mempolicy(2) man page for more details.

Install VMA/Shared Policy for a Range of Task's Address Space::

    long mbind(void *start, unsigned long len, int mode,
               const unsigned long *nmask, unsigned long maxnode,
               unsigned flags);

mbind() installs the policy specified by (mode, nmask, maxnode) as a VMA
policy for the range of the calling task's address space specified by
the 'start' and 'len' arguments.  Additional actions may be requested
via the 'flags' argument.  See the mbind(2) man page for more details.

Set Home Node for a Range of Task's Address Space::

    long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
                                     unsigned long home_node,
                                     unsigned long flags);

sys_set_mempolicy_home_node() sets the home node for a VMA policy present
in the task's address range.  The home node is the node from which pages
are attempted to be allocated first.  This helps the kernel in deviating
slightly from the default allocation policy to allocate memory close to
the local node for an explicit memory policy.

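Tying the two task-policy calls together, a minimal sketch (using the
wrappers declared in libnuma's ``<numaif.h>``, and assuming that nodes 0 and
1 exist, that the system has at most 64 possible node IDs, and that the
installed numaif.h defines the MPOL_F_* mode flags) might look like::

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long nodemask = (1UL << 0) | (1UL << 1);  /* nodes 0-1 */
        unsigned long maxnode = sizeof(nodemask) * 8;  /* covers node IDs */
        int mode = -1;
        unsigned long got = 0;

        /* Interleave future task allocations across nodes 0 and 1;
         * optional mode flags are OR'ed into the mode argument. */
        if (set_mempolicy(MPOL_INTERLEAVE | MPOL_F_STATIC_NODES,
                          &nodemask, maxnode) != 0) {
            perror("set_mempolicy");
            return 1;
        }

        /* With flags == 0, get_mempolicy() reports the task policy. */
        if (get_mempolicy(&mode, &got, maxnode, NULL, 0) != 0) {
            perror("get_mempolicy");
            return 1;
        }
        printf("mode=%d nodemask=0x%lx\n", mode, got);
        return 0;
    }
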
Memory Policy Command Line Interface
====================================

Although not strictly part of the Linux implementation of memory policy,
a command line tool, numactl(8), exists that allows one to:

+ set the task policy for a specified program via set_mempolicy(2), fork(2)
  and exec(2)

+ set the shared policy for a shared memory segment via mbind(2)

The numactl(8) tool is packaged with the run-time version of the library
containing the memory policy system call wrappers.  Some distributions
package the headers and compile-time libraries in a separate development
package.

.. _mem_pol_and_cpusets:

Memory Policies and cpusets
===========================

Memory policies work within cpusets as described above.  For memory policies
that require a set of nodes, the nodes are restricted to the set of nodes
whose memories are allowed by the cpuset constraints.  If the nodemask
specified for the policy contains nodes that are not allowed by the cpuset,
the intersection of the set of nodes specified for the policy and the set of
nodes with memory is used.  If the result is the empty set, the policy is
considered invalid and cannot be installed.

The interaction of memory policies and cpusets can be problematic when tasks
in two cpusets share access to a memory region, such as shared memory segments
created by shmget() or mmap() with the MAP_ANONYMOUS and MAP_SHARED flags, and
any of the tasks install shared policy on the region: only nodes whose
memories are allowed in both cpusets may be used in the policies.  Obtaining
this information requires "stepping outside" the memory policy APIs to use the
cpuset information, and requires that one know in what cpusets other tasks might
be attaching to the shared region.  Furthermore, if the cpusets' allowed
memory sets are disjoint, "local" allocation is the only valid policy.
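
A task can at least discover its own cpuset-allowed memory nodes without
leaving these APIs, via get_mempolicy() with MPOL_F_MEMS_ALLOWED (see
get_mempolicy(2)); learning what other tasks' cpusets allow still requires
the cpuset interfaces.  A hedged sketch, assuming libnuma's ``<numaif.h>``
and at most 64 possible node IDs::

    #include <numaif.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long mems_allowed = 0;

        /* Returns the node IDs this task may use, as constrained by
         * its cpuset, in the nodemask argument. */
        if (get_mempolicy(NULL, &mems_allowed, sizeof(mems_allowed) * 8,
                          NULL, MPOL_F_MEMS_ALLOWED) != 0) {
            perror("get_mempolicy");
            return 1;
        }

        printf("cpuset-allowed memory nodes: 0x%lx\n", mems_allowed);
        return 0;
    }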