
Introduction
============

This document describes the Linux memory manager's "Unevictable LRU"
infrastructure and the use of this to manage several types of "unevictable"
pages.

Admittedly, one can obtain the implementation details - the "what does it
do?" - by reading the code.  One hopes that the descriptions below add value
by providing the answer to "why does it do that?".


The Unevictable LRU
===================

The Unevictable LRU facility adds an additional LRU list to track
unevictable pages and to hide these pages from vmscan.  This mechanism is
based on a patch by Larry Woodman of Red Hat to address several scalability
problems with page reclaim in Linux.  The problems have been observed with
both 32-bit and 64-bit Linux kernels running on large memory x86_64 systems.

To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single node.  When a
large fraction of these pages are not evictable for any reason [see below],
vmscan will spend a lot of time scanning the LRU lists looking for the small
fraction of pages that are evictable.  This can result in a situation where
all CPUs are spending 100% of their time in vmscan for hours or days on end,
with the system completely unresponsive.
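
For concreteness, the arithmetic behind that figure::

    128 GiB / 4 KiB per page = 137,438,953,472 / 4,096
                             = 33,554,432 pages  (~32 million)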

The unevictable list addresses the following classes of unevictable pages:

* Those owned by ramfs.

* Those mapped into SHM_LOCK'd shared memory regions.

* Those mapped into VM_LOCKED [mlock()ed] VMAs; see the example below.
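
For illustration, a minimal userspace sketch that creates pages of the third
class (names and sizes are illustrative; error handling is trimmed)::

    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 16 * sysconf(_SC_PAGESIZE);
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED)
                    return 1;
            /*
             * Sets VM_LOCKED on the VMA and faults the pages in; they
             * stay unevictable until munlock()/munmap().
             */
            if (mlock(buf, len))
                    return 1;
            memset(buf, 1, len);    /* will not major-fault now */
            munlock(buf, len);
            munmap(buf, len);
            return 0;
    }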

The Unevictable LRU Page List
-----------------------------

The Unevictable LRU page list is a lie.  It was never an LRU-ordered list,
but a companion to the LRU-ordered anonymous and file, active and inactive
page lists; and now it is not even a page list.  But following familiar
convention, here in this document and in the source, we often imagine it as
a fifth LRU page list.

The Unevictable LRU infrastructure consists of an additional, per-node, LRU
list called the "unevictable" list and an associated page flag,
PG_unevictable, to indicate that the page is being managed on the
unevictable list.

The Unevictable LRU infrastructure maintains unevictable pages as if they
were on an additional LRU list for a few reasons:

(1) We get to "treat unevictable pages just like we treat other pages in the
    system - which means we get to use the same code to manipulate them, the
    same code to isolate them (for migrate, etc.), the same code to keep
    track of the statistics, etc..." [Rik van Riel]

(2) We want to be able to migrate unevictable pages between nodes for memory
    defragmentation, workload management and memory hotplug.  The Linux
    kernel can only migrate pages that it can successfully isolate from the
    LRU lists.  If we were to maintain pages elsewhere than on an LRU-like
    list, where they can be detected by isolate_lru_page(), we would prevent
    their migration, unless we reworked migration code to find the
    unevictable pages itself.

The unevictable list does not differentiate between file-backed and
anonymous, swap-backed pages.  This differentiation is only important while
the pages are, in fact, evictable.

The unevictable list benefits from the "arrayification" of the per-node LRU
lists (one list per lru_list enum element).

Memory Control Group Interaction
--------------------------------

The unevictable LRU facility interacts with the memory control group [aka
memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by
extending the lru_list enum.

The memory controller data structure automatically gets a per-node
unevictable list as a result of the "arrayification" of the per-node LRU
lists (one per lru_list enum element).  The memory controller tracks the
movement of pages to and from the unevictable list.
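
For reference, the enum as it appears in include/linux/mmzone.h (the exact
layout may differ between kernel versions)::

    #define LRU_BASE   0
    #define LRU_ACTIVE 1
    #define LRU_FILE   2

    enum lru_list {
            LRU_INACTIVE_ANON = LRU_BASE,
            LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
            LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
            LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
            LRU_UNEVICTABLE,        /* the additional, per-node list */
            NR_LRU_LISTS
    };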

When a memory control group comes under memory pressure, the controller
will not attempt to reclaim pages on the unevictable list.

On the other hand, if too many of the pages charged to the control group
are unevictable, the evictable portion of the working set of the tasks in
the control group may not fit into the available memory.  This can cause
the control group to thrash or to OOM-kill tasks.

Marking Address Spaces Unevictable
----------------------------------

Note that SHM_LOCK is not required to page in the locked pages if they're
swapped out; the application must touch the pages manually if it wants to
ensure they're in memory.
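
A minimal userspace sketch of that sequence (illustrative only; SHM_LOCK
needs CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK)::

    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
            size_t len = 1 << 20;
            int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
            char *p = shmat(id, NULL, 0);

            if (id < 0 || p == (void *)-1)
                    return 1;
            /* Marks the segment unevictable; faults nothing in. */
            shmctl(id, SHM_LOCK, NULL);
            /* Only this touch actually brings the pages into memory. */
            memset(p, 0, len);
            shmdt(p);
            shmctl(id, IPC_RMID, NULL);
            return 0;
    }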

For example, the amount of unevictable memory marked by the i915 driver is
roughly the bounded object size in debugfs/dri/0/i915_gem_objects.

Detecting Unevictable Pages
---------------------------


Vmscan's Handling of Unevictable Pages
--------------------------------------

A page found to be unevictable is moved to the unevictable list for the
memory cgroup and node being scanned.

There may be situations where a page is mapped into a VM_LOCKED VMA, but
the page is not marked as PG_mlocked.  Such pages will make it all the way
to shrink_active_list() or shrink_page_list(), where they will be detected
when vmscan walks the reverse map in page_referenced() or try_to_unmap().
The page is culled to the unevictable list when it is released by the
shrinker.

To "cull" an unevictable page, vmscan simply puts the page back on the LRU
list using putback_lru_page() - the inverse operation to isolate_lru_page()
- after dropping the page lock.
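
A condensed sketch of that pairing - isolate_lru_page() and
putback_lru_page() are the real interfaces, the surrounding logic is
simplified::

    /* Condensed sketch; not the exact vmscan source. */
    if (isolate_lru_page(page) == 0) {      /* 0 == successfully isolated */
            /* ... reclaim attempt discovers the page is unevictable ... */
            putback_lru_page(page);         /* re-checks page_evictable()
                                             * and moves the page to the
                                             * unevictable list if the test
                                             * fails */
    }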

MLOCKED Pages
=============

History
-------

When munlocking a page, an rmap walk was needed to find out whether any
other VM_LOCKED VMAs still mapped the page.

Since unevictable pages are never scanned, there is no use for that linked
list anyway - though its size is maintained for meminfo.

Basic Management
----------------

mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of
unevictable pages.  When such a page has been "noticed" by the memory
management subsystem, the page is marked with the PG_mlocked flag.  This
can be manipulated using the PageMlocked() functions.

A PG_mlocked page will be placed on the unevictable list when it is added
to the LRU.  Such pages can be "noticed" by memory management in several
places:

(1) in the mlock()/mlock2()/mlockall() system call handlers;

(2) in the mmap() system call handler when mmapping a region with the
    MAP_LOCKED flag;

(3) mmapping a region in a task that has called mlockall() with the
    MCL_FUTURE flag;

(4) in the fault path and when a VM_LOCKED stack segment is expanded; or

(5) as mentioned above, in vmscan:shrink_page_list() when attempting to
    reclaim a page in a VM_LOCKED VMA by page_referenced() or
    try_to_unmap().

mlocked pages become unlocked and rescued from the unevictable list when:

(1) mapped in a range unlocked via the munlock()/munlockall() system calls;

(2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
    unmapping at task exit;

(3) when the page is truncated from the last VM_LOCKED VMA of an mmapped
    file; or

(4) before a page is COW'd in a VM_LOCKED VMA.

mlock()/mlock2()/mlockall() System Call Handling
------------------------------------------------

mlock_fixup() is used for both mlocking and munlocking a range of memory.
A call to mlock() an already VM_LOCKED VMA, or to munlock() a VMA that is
not VM_LOCKED, is treated as a no-op and mlock_fixup() simply returns.
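
A condensed sketch of that no-op filter (simplified from mm/mlock.c; the
real function takes more parameters and goes on to split or merge VMAs)::

    /* Simplified sketch; not the exact source. */
    static int mlock_fixup(struct vm_area_struct *vma, vm_flags_t newflags)
    {
            /*
             * mlock() of an already VM_LOCKED VMA, or munlock() of a VMA
             * that is not VM_LOCKED: nothing to change, return at once.
             */
            if (newflags == vma->vm_flags)
                    return 0;

            /* ... split/merge the VMA, update vm_flags, populate ... */
            return 0;
    }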

Note that the VMA being mlocked might be mapped with PROT_NONE.  In this
case, get_user_pages() will be unable to fault in the pages; that's okay.
If pages do end up getting faulted into this VM_LOCKED VMA, they will be
handled in the fault path - which is also how mlock2()'s MLOCK_ONFAULT
areas are handled.
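
For contrast, a minimal userspace sketch of MLOCK_ONFAULT (mlock2() is
exposed by glibc >= 2.27; names and sizes are illustrative)::

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 16 * sysconf(_SC_PAGESIZE);
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (buf == MAP_FAILED)
                    return 1;
            /* Lock the range, but fault nothing in yet. */
            if (mlock2(buf, len, MLOCK_ONFAULT))
                    return 1;
            /* The first touch faults the page in; the fault path then
             * marks it mlocked/unevictable. */
            buf[0] = 1;
            munlock(buf, len);
            munmap(buf, len);
            return 0;
    }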

Filtering Special VMAs
----------------------

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory.
   We neither need nor want to mlock() these pages.

munlock()/munlockall() System Call Handling
-------------------------------------------

The munlock path reaches mlock_pte_range() via walk_page_range() via
mlock_vma_pages_range() - the same page table walk used when mlocking a VMA
range, but with VM_LOCKED now cleared from the VMA's flags.

Migrating MLOCKED Pages
-----------------------

PG_mlocked is set on the new page when the new page is mapped in place of
the migration entry in a VM_LOCKED VMA.  If the page was unevictable, the
new page is made unevictable as well.

The "unneeded" page - old page on success, new page on failure - is freed
when the reference count held by the migration process is released.

Compacting MLOCKED Pages
------------------------

The memory map can be scanned for compactable regions, and the default
behavior is to let unevictable pages be moved.
/proc/sys/vm/compact_unevictable_allowed controls this behavior (see
Documentation/admin-guide/sysctl/vm.rst).  The work of compaction is mostly
handled by the page migration code, and the same work flow as described in
Migrating MLOCKED Pages will apply.
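
A short sketch of reading the knob from a program (the file simply holds
0 or 1; writing works the same way)::

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/vm/compact_unevictable_allowed", "r");
            int allowed;

            if (!f || fscanf(f, "%d", &allowed) != 1)
                    return 1;
            /* 1 (default): compaction may move unevictable pages;
             * 0: compaction skips them. */
            printf("compact_unevictable_allowed = %d\n", allowed);
            fclose(f);
            return 0;
    }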

MLOCKING Transparent Huge Pages
-------------------------------

We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
the PMD on the border of a VM_LOCKED VMA will be split into a PTE table.

This way the huge page is accessible for vmscan.  Under memory pressure the
page will be split and the subpages which belong to VM_LOCKED VMAs will be
moved to the unevictable LRU; the rest can be reclaimed.

/proc/meminfo's Unevictable and Mlocked amounts do not include those parts
of a transparent huge page which are mapped only by PTEs in VM_LOCKED VMAs.

mmap(MAP_LOCKED) System Call Handling
-------------------------------------

In addition to the mlock(), mlock2() and mlockall() system calls, an
application can request that a region of memory be mlocked by supplying the
MAP_LOCKED flag to the mmap() call.

The mmaped area will still have properties of the locked area - pages will
not get swapped out - but major page faults to fault memory in might still
happen.
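
A minimal sketch of that variant (illustrative; unlike mmap() followed by
mlock(), a population failure here is not reported as an error)::

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 16 * sysconf(_SC_PAGESIZE);
            /* The VMA is created VM_LOCKED, but mmap() does not fail
             * just because the pages could not all be populated. */
            char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED,
                             -1, 0);

            if (buf == MAP_FAILED)
                    return 1;
            buf[0] = 1;     /* may still take a major fault */
            munmap(buf, len);
            return 0;
    }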

Furthermore, any mmap() call or brk() call that expands the heap by a task
that has previously called mlockall() with the MCL_FUTURE flag will result
in the newly mapped memory being mlocked.  Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages
and populate the page table.

To mlock a range of memory under the unevictable/mlock infrastructure, the
mmap() handler and task address space expansion functions call
populate_vma_page_range(), specifying the VMA and the address range to
mlock.

munmap()/exit()/exec() System Call Handling
-------------------------------------------

When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps them.

Truncating MLOCKED Pages
------------------------

File truncation or hole punching forcibly unmaps the deleted pages from
userspace; truncation even unmaps and deletes any private anonymous pages
which had been Copied-On-Write from the file pages now being truncated.

Page Reclaim in shrink_*_list()
-------------------------------

vmscan's shrink_active_list() culls any obviously unevictable pages -
i.e. !page_evictable(page) pages - diverting those to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto
the active/inactive LRU lists.  Note that these pages do not have
PageUnevictable set - otherwise they would be on the unevictable list and
shrink_active_list() would never see them.
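
The test itself is small.  A sketch of the page_evictable() helper, close
to the kernel's version (the exact form varies between releases)::

    static inline bool page_evictable(struct page *page)
    {
            bool ret;

            /* Prevent address_space of inode and swap cache from
             * being freed. */
            rcu_read_lock();
            ret = !mapping_unevictable(page_mapping(page)) &&
                  !PageMlocked(page);
            rcu_read_unlock();
            return ret;
    }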

Some examples of these unevictable pages on the LRU lists are:

(1) ramfs pages that have been placed on the LRU lists when first
    allocated.

(2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
    allocate or fault in the pages in the shared memory region.  This
    happens when an application accesses the page the first time after
    SHM_LOCK'ing the segment.

(3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
    but events left mlock_count too low, so they were munlocked too early.

vmscan's shrink_inactive_list() also diverts any unevictable pages that it
finds on the inactive lists to the appropriate memory cgroup and node
unevictable list.

rmap's page_referenced_one(), called via vmscan's shrink_active_list() or
shrink_page_list(), and rmap's try_to_unmap_one(), called via
shrink_page_list(), check for (3) pages still mapped into VM_LOCKED VMAs,
and call mlock_vma_page() to correct them.  Such pages are culled to the
unevictable list when released by the shrinker.
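
A condensed sketch of that check as it sits inside the rmap walk (not the
exact source; the real code also deals with the page table lock)::

    /* Condensed sketch; not the exact rmap source. */
    if (vma->vm_flags & VM_LOCKED) {
            /* Still mapped by a VM_LOCKED VMA: fix up the missed
             * mlock instead of reclaiming. */
            mlock_vma_page(page);
            return false;   /* abort the walk; page is unevictable */
    }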