Lines Matching full:pages

15 pages.
30 pages and to hide these pages from vmscan. This mechanism is based on a patch
36 main memory will have over 32 million 4k pages in a single zone. When a large
37 fraction of these pages are not evictable for any reason [see below], vmscan
39 of pages that are evictable. This can result in a situation where all CPUs are
43 The unevictable list addresses the following classes of unevictable pages:
51 The infrastructure may also be able to handle other conditions that make pages
66 The Unevictable LRU infrastructure maintains unevictable pages on an additional
69 (1) We get to "treat unevictable pages just like we treat other pages in the
74 (2) We want to be able to migrate unevictable pages between nodes for memory
76 can only migrate pages that it can successfully isolate from the LRU
77 lists. If we were to maintain pages elsewhere than on an LRU-like list,
79 migration, unless we reworked migration code to find the unevictable pages
84 swap-backed pages. This differentiation is only important while the pages are,
91 unevictable pages are placed directly on the page's zone's unevictable list
92 under the zone lru_lock. This allows us to prevent the stranding of pages on
106 lru_list enum element). The memory controller tracks the movement of pages to
110 not attempt to reclaim pages on the unevictable list. This has a couple of
113 (1) Because the pages are "hidden" from reclaim on the unevictable list, the
114 reclaim process can be more efficient, dealing only with pages that have a
117 (2) On the other hand, if too many of the pages charged to the control group
128 For facilities such as ramfs, none of the pages attached to the address space
129 may be evicted. To prevent eviction of any such pages, the AS_UNEVICTABLE
153 Note that SHM_LOCK is not required to page in the locked pages if they're
154 swapped out; the application must touch the pages manually if it wants to
162 Detecting Unevictable Pages
173 any special effort to push any pages in the SHM_LOCK'd area to the unevictable
174 list. Instead, vmscan will do this if and when it encounters the pages during
178 the pages in the region and "rescue" them from the unevictable list if no other
180 the pages are also "rescued" from the unevictable list in the process of
183 page_evictable() also checks for mlocked pages by testing an additional page
188 Vmscan's Handling of Unevictable Pages
191 If unevictable pages are culled in the fault path, or moved to the unevictable
192 list at mlock() or mmap() time, vmscan will not encounter the pages until they
197 pages in all of the shrink_{active|inactive|page}_list() functions and will
198 "cull" such pages that it encounters: that is, it diverts those pages to the
202 page is not marked as PG_mlocked. Such pages will make it all the way to
214 event and movement of pages onto the unevictable list should be rare, these
219 MLOCKED Pages
230 The "Unevictable mlocked Pages" infrastructure is based on work originally
231 posted by Nick Piggin in an RFC patch entitled "mm: mlocked pages off LRU".
233 to achieve the same objective: hiding mlocked pages from vmscan.
237 prevented the management of the pages on an LRU list, and thus mlocked pages
241 Nick resolved this by putting mlocked pages back on the lru list before
251 mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
252 pages. When such a page has been "noticed" by the memory management subsystem,
257 the LRU. Such pages can be "noticed" by memory management in several places:
267 (4) in the fault path, if mlocked pages are "culled" in the fault path,
276 mlocked pages become unlocked and rescued from the unevictable list when:
303 populate_vma_page_range() to fault in the pages via get_user_pages() and to
304 mark the pages as mlocked via mlock_vma_page().
307 get_user_pages() will be unable to fault in the pages. That's okay. If pages
318 detect and cull such pages.
341 1) VMAs with VM_IO or VM_PFNMAP set are skipped entirely. The pages behind
343 mlocked. In any case, most of the pages have no struct page in which to so
348 neither need nor want to mlock() these pages. However, to preserve the
351 allocate the huge pages and populate the ptes.
353 3) VMAs with VM_DONTEXPAND are generally userspace mappings of kernel pages,
354 such as the VDSO page, relay channel pages, etc. These pages
383 faulting in and mlocking pages, get_user_pages() was unreliable for visiting
384 these pages for munlocking. Because we don't want to leave pages mlocked,
386 fetching the pages - all of which should be resident as a result of previous
389 For munlock(), populate_vma_page_range() unlocks individual pages by calling
395 mlocked pages. Note, however, that at this point we haven't checked whether
412 Migrating MLOCKED Pages
419 of mlocked pages and other unevictable pages. This involves simply moving the
427 can skip these pages by testing the page mapping under page lock.
429 To complete page migration, we place the new and old pages back onto the LRU
432 process is released. To ensure that we don't strand pages on the unevictable
434 putback_lru_page() function to add migrated pages back to the LRU.
437 Compacting MLOCKED Pages
445 MLOCKED PAGES will apply.
447 MLOCKING Transparent Huge Pages
460 We handle this by keeping PTE-mapped huge pages on normal LRU lists: the
477 area will still have properties of the locked area - i.e. pages will not get
483 changes, the kernel simply called make_pages_present() to allocate pages and
492 populate_vma_page_range() returns the number of pages NOT mlocked. All of the
497 and pages allocated into that region.
505 munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
506 Before the unevictable/mlock changes, mlocking did not mark the pages in any
514 actually contain mlocked pages will be passed to munlock_vma_pages_all().
526 Pages can, of course, be mapped into multiple VMAs. Some of these VMAs may
532 in section "Vmscan's Handling of Unevictable Pages". To handle this situation,
538 functions handle anonymous and mapped file and KSM pages, as these types of
539 pages have different reverse map lookup mechanisms, with different locking.
551 holepunching, and truncation of file pages and their anonymous COWed pages.
569 mapped file and KSM pages with a flag argument specifying unlock versus unmap
586 shrink_active_list() culls any obviously unevictable pages - i.e.
588 However, shrink_active_list() only sees unevictable pages that made it onto the
589 active/inactive lru lists. Note that these pages do not have PageUnevictable
593 Some examples of these unevictable pages on the LRU lists are:
595 (1) ramfs pages that have been placed on the LRU lists when first allocated.
597 (2) SHM_LOCK'd shared memory pages. shmctl(SHM_LOCK) does not attempt to
598 allocate or fault in the pages in the shared memory region. This happens
602 (3) mlocked pages that could not be isolated from the LRU and moved to the
605 shrink_inactive_list() also diverts any unevictable pages that it finds on the
608 shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
609 after shrink_active_list() had moved them to the inactive list, or pages mapped
614 shrink_page_list() again culls obviously unevictable pages that it could
615 encounter for similar reason to shrink_inactive_list(). Pages mapped into