The Unevictable LRU facility adds an additional LRU list to track unevictable
pages and to hide these pages from vmscan.  This mechanism is based on a patch
by Larry Woodman of Red Hat to address several scalability problems with page
reclaim in Linux.
To illustrate this with an example, a non-NUMA x86_64 platform with 128GB of
main memory will have over 32 million 4k pages in a single node.  When a large
fraction of these pages are not evictable for any reason [see below], vmscan
will spend a lot of time scanning LRU lists looking for the small fraction
of pages that are evictable.  This can result in a situation where all CPUs are
spending 100% of their time in vmscan, with the system effectively unresponsive.
The unevictable list addresses the following classes of unevictable pages:

 * Those owned by ramfs.

 * Those mapped into SHM_LOCK'd shared memory regions.

 * Those mapped into VM_LOCKED [mlock()ed] VMAs.

The infrastructure may also be able to handle other conditions that make pages
unevictable, either by definition or by circumstance, in the future.
The Unevictable LRU infrastructure maintains unevictable pages as if they were
on an additional LRU list for a few reasons:

 (1) We get to "treat unevictable pages just like we treat other pages in the
     system - which means we get to use the same code to manipulate them, the
     same code to isolate them (for migrate, etc.), the same code to keep track
     of the statistics, etc..." [Rik van Riel]

 (2) We want to be able to migrate unevictable pages between nodes for memory
     defragmentation, workload management and memory hotplug.  The Linux kernel
     can only migrate pages that it can successfully isolate from the LRU
     lists (or "Movable" pages: outside of consideration here).  If we were to
     maintain pages elsewhere than on an LRU-like list, where they can be
     detected by isolate_lru_page(), we would prevent their migration.
The unevictable list does not differentiate between file-backed and
swap-backed pages.  This differentiation is only important while the pages are,
in fact, evictable.
The unevictable list also interacts with the memory control group: the memory
controller automatically gets its own per-node unevictable list (one per
lru_list enum element).  The memory controller tracks the movement of pages to
and from the unevictable list.

When a memory control group comes under memory pressure, the controller will
not attempt to reclaim pages on the unevictable list.  This has a couple of
effects:

 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
     reclaim process can be more efficient, dealing only with pages that have a
     chance of being reclaimed.

 (2) On the other hand, if too many of the pages charged to the control group
     are unevictable, the evictable portion of the working set of the tasks in
     the control group may not fit into the available memory.  This can cause
     the control group to thrash or to OOM-kill tasks.
For facilities such as ramfs none of the pages attached to the address space
may be evicted.  To prevent eviction of any such pages, the AS_UNEVICTABLE
address space flag is provided, and this can be manipulated by a filesystem
using wrapper functions such as mapping_set_unevictable(),
mapping_clear_unevictable() and mapping_unevictable().
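As an illustration only, a kernel-side sketch of how a filesystem might use
these wrappers when creating an inode; the example_* helpers are hypothetical
names, not actual ramfs code::

  #include <linux/fs.h>
  #include <linux/pagemap.h>

  /* Hide all pages of this inode's mapping from vmscan. */
  static void example_mark_inode_unevictable(struct inode *inode)
  {
          mapping_set_unevictable(inode->i_mapping);
  }

  /* Make the mapping's pages eligible for reclaim again. */
  static void example_mark_inode_evictable(struct inode *inode)
  {
          mapping_clear_unevictable(inode->i_mapping);
  }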
SYSV SHM uses this mechanism as well: shmctl(SHM_LOCK) marks the segment's
address space unevictable until SHM_UNLOCK is called.  Note that SHM_LOCK is
not required to page in the locked pages if they're swapped out; the
application must touch the pages manually if it wants to ensure they're in
memory.
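For example, a userspace program that wants an SHM_LOCK'd segment to be
resident right away has to fault the pages in itself.  A minimal sketch
(error handling trimmed; SHM_LOCK needs CAP_IPC_LOCK or sufficient
RLIMIT_MEMLOCK)::

  #include <stdio.h>
  #include <string.h>
  #include <sys/ipc.h>
  #include <sys/shm.h>

  int main(void)
  {
          size_t len = 1 << 20;
          int id = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
          char *p;

          if (id < 0) {
                  perror("shmget");
                  return 1;
          }
          p = shmat(id, NULL, 0);
          if (p == (void *)-1) {
                  perror("shmat");
                  return 1;
          }

          /* Mark the segment locked: its pages become unevictable... */
          if (shmctl(id, SHM_LOCK, NULL) != 0)
                  perror("shmctl(SHM_LOCK)");

          /* ...but they are not faulted in; touch them to make them resident. */
          memset(p, 0, len);

          shmdt(p);
          shmctl(id, IPC_RMID, NULL);
          return 0;
  }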
Detecting Unevictable Pages
---------------------------
For address spaces that are marked unevictable after being populated (as SHM
regions might be), the lock action (e.g. SHM_LOCK) can be lazy: it need not make
any special effort to push any pages in the SHM_LOCK'd area to the unevictable
list.  Instead, vmscan will do this if and when it encounters the pages during
a reclamation scan.

On an unlock action (such as SHM_UNLOCK), the unlocker (e.g. shmctl()) must scan
the pages in the region and "rescue" them from the unevictable list if no other
condition is keeping them unevictable.  If an unevictable region is destroyed,
the pages are also "rescued" from the unevictable list in the process of
freeing them.

page_evictable() checks the AS_UNEVICTABLE flag described above; it also checks
for mlocked pages by testing an additional page flag, PG_mlocked (as wrapped by
PageMlocked()), which is set when a page is faulted into a VM_LOCKED VMA, or
found in a VMA being VM_LOCKED.
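Conceptually the whole test boils down to those two conditions.  A simplified
sketch of the check (the real helper lives in the mm internals and its exact
form varies across kernel versions; example_page_evictable() is a made-up
name)::

  #include <linux/mm.h>
  #include <linux/pagemap.h>
  #include <linux/page-flags.h>

  /* Evictable unless the mapping is AS_UNEVICTABLE or the page is mlocked. */
  static inline bool example_page_evictable(struct page *page)
  {
          return !mapping_unevictable(page_mapping(page)) &&
                 !PageMlocked(page);
  }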
Vmscan's Handling of Unevictable Pages
--------------------------------------
If unevictable pages are culled in the fault path, or moved to the unevictable
list at mlock() or mmap() time, vmscan will not encounter the pages until they
have become evictable again (via munlock() for example) and have been "rescued"
from the unevictable list.  However, an unevictable page may sometimes be left
on one of the regular active/inactive LRU lists for vmscan to deal with.
vmscan checks for such
pages in all of the shrink_{active|inactive|page}_list() functions and will
"cull" such pages that it encounters: that is, it diverts those pages to the
unevictable list for the memory cgroup and node being scanned.

There may also be situations where a page is mapped into a VM_LOCKED VMA, but
the page is not marked as PG_mlocked.  Such pages will make it all the way to
shrink_active_list() or shrink_page_list(), where they are detected when vmscan
walks the reverse map in page_referenced() or try_to_unmap(), and are culled to
the unevictable list when released by the shrinker.
MLOCKED Pages
=============
221 The "Unevictable mlocked Pages" infrastructure is based on work originally
222 posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
224 to achieve the same objective: hiding mlocked pages from vmscan.
229 of the pages on an LRU list, and thus mlocked pages were not migratable as
233 Nick resolved this by putting mlocked pages back on the LRU list before
243 put to work, without preventing the migration of mlocked pages. This is why
244 the "Unevictable LRU list" cannot be a linked list of pages now; but there was
mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
pages.  When such a page has been "noticed" by the memory management subsystem,
the page is marked with the PG_mlocked flag and is placed on the unevictable
list when it is added to
the LRU.  Such pages can be "noticed" by memory management in several places:

 (1) in the mlock()/mlock2()/mlockall() system call handlers;

 (2) in the mmap() system call handler when mmapping a region with the
     MAP_LOCKED flag;

 (3) mmapping a region in a task that has called mlockall() with the MCL_FUTURE
     flag;

 (4) in the fault path; or

 (5) in vmscan, when attempting to reclaim a page mapped into a VM_LOCKED VMA.
mlocked pages become unlocked and rescued from the unevictable list when:

 (1) mapped in a range unlocked via the munlock()/munlockall() system calls;

 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
     unmapping at task exit;

 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped file;
     or

 (4) before a page is COW'd in a VM_LOCKED VMA.
The mlock(), mlock2() and mlockall() system call handlers proceed to
mlock_fixup() for each VMA in the specified range.  If the VMA passes the
filtering described below, mlock_fixup() will attempt to merge the VMA with its
neighbors or to split
off a subset of the VMA if the range does not cover the entire VMA.  Any pages
already present in the VMA are then marked as mlocked.  Before returning from
the system call, do_mlock() or mlockall() will call
__mm_populate() to fault in the remaining pages via get_user_pages() and to
mark those pages as mlocked as they are faulted.
Note that a VMA being mlocked might be mapped with PROT_NONE.  In this case,
get_user_pages() will be unable to fault in the pages.  That's okay.  If pages
do end up getting faulted into such a VM_LOCKED VMA later, they will be handled
in the fault path - which is also how mlock2()'s MLOCK_ONFAULT areas are
handled.
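From userspace the two flavours look like this.  A minimal sketch (assumes a
glibc recent enough to provide the mlock2() wrapper and a sufficient
RLIMIT_MEMLOCK)::

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 1 << 20;
          char *buf = malloc(len);

          if (!buf)
                  return 1;

          /* Populate and lock the whole range up front. */
          if (mlock(buf, len) != 0)
                  perror("mlock");
          munlock(buf, len);

          /* Lock pages only as they are faulted in. */
          if (mlock2(buf, len, MLOCK_ONFAULT) != 0)
                  perror("mlock2(MLOCK_ONFAULT)");
          memset(buf, 0, len);        /* now faulted in and mlocked */

          munlock(buf, len);
          free(buf);
          return 0;
  }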
mlock_fixup() filters several classes of "special" VMAs:

1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely.  The pages behind
   these mappings are inherently pinned, so we do not need to mark them as
   mlocked.  In any case, most of the pages have no struct page in which to so
   mark the page.

2) VMAs mapping hugetlbfs pages are already effectively pinned into memory.  We
   neither need nor want to mlock() these pages.  But __mm_populate() includes
   hugetlbfs ranges, allocating the huge pages and populating the PTEs.

3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
   such as the VDSO page, relay channel pages, etc.  These pages are inherently
   unevictable and are not managed on the LRU lists.
The munlock() and munlockall() system calls are handled by the same
mlock_fixup() function.  If the VMA is VM_LOCKED, mlock_fixup() again attempts
to merge or split off the
specified range.  All pages in the VMA are then munlocked by munlock_page() as
the kernel walks the page tables of the range.
Migrating MLOCKED Pages
-----------------------
Linux supports migration
of mlocked pages and other unevictable pages.  PG_mlocked is cleared from the
old page when it is unmapped from the last VM_LOCKED VMA, and set in the new
page when it is mapped in place of the migration entry in a VM_LOCKED VMA.

Page migration can race with mlocking: VM_LOCKED is set on a VMA
before mlocking any pages already present, and if one of those pages were
migrated before the mlock page table walk reached it, its mlock accounting
could be applied twice; the kernel guards against such double accounting while
the VMA's pages are being walked.

To complete page migration, we place the old and new pages back onto the LRU
afterwards.  The "unneeded" page - the old page on success, the new page on
failure - will be freed when the reference count held by the migration process
is released.
Compacting MLOCKED Pages
------------------------
The memory map can be scanned for compactable regions, and the default behavior
is to let unevictable pages be moved.  /proc/sys/vm/compact_unevictable_allowed
controls this behavior.  The work of compaction is mostly handled by the page
migration code, and the same work
flow as described in Migrating MLOCKED Pages will apply.
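For reference, the knob can be inspected (or, as root, toggled) like any other
sysctl; a tiny illustrative reader, nothing kernel-specific about it::

  #include <stdio.h>

  int main(void)
  {
          FILE *f = fopen("/proc/sys/vm/compact_unevictable_allowed", "r");
          int allowed;

          if (!f) {
                  perror("compact_unevictable_allowed");
                  return 1;
          }
          if (fscanf(f, "%d", &allowed) == 1)
                  printf("compaction may move unevictable pages: %s\n",
                         allowed ? "yes" : "no");
          fclose(f);
          return 0;
  }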
MLOCKING Transparent Huge Pages
-------------------------------
A transparent huge page is represented by a single entry on an LRU list, so we
can only make the entire compound page unevictable, and splitting it at mlock()
time is undesirable because split_huge_page() can fail.  When only part of a
huge page is mlocked,
we handle this by keeping PTE-mlocked huge pages on evictable LRU lists: the
PMD on the border of a VM_LOCKED VMA will be split into a PTE table.  This way
the huge page remains visible to vmscan; under memory pressure it can be split,
with the subpages that belong to VM_LOCKED VMAs moved to the unevictable list
and the rest reclaimed.
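The partial-mlock case arises naturally from userspace.  A small illustration
(whether THP is actually used depends on alignment and on the system's THP
settings, so treat this only as a sketch)::

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 4UL << 20;        /* room for a couple of 2 MiB THPs */
          char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }

          madvise(p, len, MADV_HUGEPAGE);   /* ask for THP if available */
          memset(p, 0, len);                /* populate the range */

          /*
           * Lock a range that covers only part of a huge page: the kernel
           * splits the PMD mapping at the VM_LOCKED boundary but keeps the
           * compound page on the evictable LRU lists, as described above.
           */
          if (mlock(p, 1UL << 20) != 0)
                  perror("mlock");

          munmap(p, len);
          return 0;
  }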
In addition to the mlock() family of system calls, an application can request
that a region be mlocked by supplying the MAP_LOCKED flag to mmap().
The mmapped area will still have properties of the locked area - pages will not
get swapped out - but major page faults to fault the memory in can still
happen.

Furthermore, any mmap() or brk() call that expands the address space of a task
that has previously called mlockall() with the MCL_FUTURE flag will result in
the newly mapped memory being mlocked.  Before the unevictable/mlock
changes, the kernel simply called make_pages_present() to allocate pages
and populate the page table.
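Both paths can be exercised from a few lines of userspace; a minimal sketch
(either call can fail with ENOMEM or EPERM if RLIMIT_MEMLOCK is too small and
CAP_IPC_LOCK is absent)::

  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          /* Region mlocked at mmap() time via MAP_LOCKED. */
          void *locked = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
          if (locked == MAP_FAILED)
                  perror("mmap(MAP_LOCKED)");

          /* From here on, new mappings are mlocked as they are created. */
          if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
                  perror("mlockall");

          void *later = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (later != MAP_FAILED)
                  memset(later, 0, 4096);   /* populated and mlocked */

          return 0;
  }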
When unmapping an mlocked region of memory, whether by an explicit call to
munmap() or via an internal unmap from exit() or exec() processing, we must
munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
Before the unevictable/mlock changes, mlocking did not mark the pages in any
way, so unmapping them required no processing.
Truncating MLOCKED Pages
------------------------
File truncation or hole punching forcibly unmaps the deleted pages from
userspace; truncation even unmaps and deletes any private anonymous pages
which had been Copied-On-Write from the file pages now being truncated.

Mlocked pages can be munlocked and deleted in this way: like with munmap(),
each page being unmapped from a VM_LOCKED VMA is munlocked as its rmap is torn
down.

However, there is a race with a concurrent munlock(): munlock starts its
munlocking by clearing VM_LOCKED from a VMA, before munlocking all the pages
present.  If one of those pages were unmapped by truncation or hole punch
before the munlock walk reached it, it would not be recognized as mlocked by
this VMA and would be missed by the walk; such a stray page is cleaned up
later, when it is freed.
vmscan's shrink_active_list() culls any obviously unevictable pages -
i.e. !page_evictable(page) pages - diverting those to the unevictable list.
However, shrink_active_list() only sees unevictable pages that made it onto the
active/inactive LRU lists.  Note that these pages do not have PageUnevictable
set - otherwise they would be on the unevictable list and shrink_active_list()
would never see them.
Some examples of these unevictable pages on the LRU lists are:

 (1) ramfs pages that have been placed on the LRU lists when first allocated.

 (2) SHM_LOCK'd shared memory pages.  shmctl(SHM_LOCK) does not attempt to
     allocate or fault in the pages in the shared memory region.  This happens
     when an application accesses the pages for the first time after locking
     the segment.

 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
     but are not; see below for how these are corrected.
vmscan's shrink_inactive_list() and shrink_page_list() similarly divert obviously
unevictable pages found on the inactive lists to the appropriate memory cgroup
and node unevictable list.
The reverse map walks done on behalf of vmscan - page_referenced(), called via
shrink_active_list() or shrink_page_list(), and try_to_unmap(), called via
shrink_page_list() -
check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_page()
to correct them.  Such pages are culled to the unevictable list when released
by the shrinker.