Lines Matching refs:VMA
197 There may be situations where a page is mapped into a VM_LOCKED VMA, but the
247 mlocked pages - pages mapped into a VM_LOCKED VMA - are a class of unevictable
267 reclaim a page in a VM_LOCKED VMA via try_to_unmap()
269 all of which result in the VM_LOCKED flag being set for the VMA if it doesn't
276 (2) munmap()'d out of the last VM_LOCKED VMA that maps the page, including
279 (3) when the page is truncated from the last VM_LOCKED VMA of an mmapped file;
282 (4) before a page is COW'd in a VM_LOCKED VMA.
289 for each VMA in the range specified by the call. In the case of mlockall(),
292 an already VM_LOCKED VMA, or to munlock() a VMA that is not VM_LOCKED is
295 If the VMA passes some filtering as described in "Filtering Special Vmas"
296 below, mlock_fixup() will attempt to merge the VMA with its neighbors or split
297 off a subset of the VMA if the range does not cover the entire VMA. Once the
298 VMA has been merged or split or neither, mlock_fixup() will call
302 Note that the VMA being mlocked might be mapped with PROT_NONE. In this case,
304 do end up getting faulted into this VM_LOCKED VMA, we'll handle them in the
312 In the worst case, this will result in a page mapped in a VM_LOCKED VMA
318 be mlocked by another task/VMA and we don't want to do extra work. We
346 mlock_fixup() will call make_pages_present() in the hugetlbfs VMA range to
368 handled by mlock_fixup(). Again, if called for an already munlocked VMA,
369 mlock_fixup() simply returns. Because of the VMA filtering discussed above,
373 If the VMA is VM_LOCKED, mlock_fixup() again attempts to merge or split off the
375 populate_vma_page_range() - the same function used to mlock a VMA range -
378 Because the VMA access protections could have been changed to PROT_NONE after
402 page statistics if it finds another VMA holding the page mlocked. If we fail
457 A PMD on the border of a VM_LOCKED VMA will be split into a PTE table.
491 attempting to fault in a VMA with PROT_NONE access. In this case, we leave the
501 munlock the pages if we're removing the last VM_LOCKED VMA that maps the pages.
508 specifies the entire VMA range when munlock()ing during unmap of a region.
509 Because of the VMA filtering when mlock()ing regions, only "normal" VMAs that
512 munlock_vma_pages_all() clears the VM_LOCKED VMA flag and, like mlock_fixup()
514 for the VMA's memory range and munlock_vma_page() each resident page mapped by
515 the VMA. This effectively munlocks the page, but only if this is the last
516 VM_LOCKED VMA that maps the page.
537 it will call try_to_unmap_one() for every VMA which might contain the page.
540 VMA, it will then mlock the page via mlock_vma_page() instead of unmapping it,
560 VM_LOCKED VMA without actually attempting to unmap all PTEs from the
567 for VM_LOCKED VMAs. When such a VMA is found, as in the try_to_unmap() case,
571 Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
572 reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
573 However, the scan can terminate when it encounters a VM_LOCKED VMA.