Searched refs: split_huge_page (Results 1 – 11 of 11) sorted by relevance
80 calling split_huge_page(page). This is what the Linux VM does before
81 it tries to swapout the hugepage for example. split_huge_page() can fail
102 split_huge_page() or split_huge_pmd() has a cost.
156 split_huge_page internally has to distribute the refcounts in the head
160 additional pins (i.e. from get_user_pages). split_huge_page() fails any
162 sum of mapcount of all sub-pages plus one (split_huge_page caller must
165 split_huge_page uses migration entries to stabilize page->_refcount and
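
The documentation hits above spell out the calling convention: the caller must hold the page lock, and split_huge_page() fails with a non-zero return when the page carries extra pins (e.g. from get_user_pages()). A minimal caller sketch under those assumptions; try_split_one_page() is a hypothetical helper, not a kernel function:

    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/huge_mm.h>

    /* Hypothetical helper, not a kernel function: the basic call
     * pattern around split_huge_page().  The caller must hold the
     * page lock, and the split fails (non-zero return) when the
     * refcount exceeds the sum of mapcounts plus the caller's own
     * reference, e.g. because of get_user_pages() pins. */
    static int try_split_one_page(struct page *page)
    {
            int ret;

            lock_page(page);
            ret = split_huge_page(page);    /* 0 on success */
            unlock_page(page);
            return ret;
    }
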
453 We cannot just split the page on partial mlock() as split_huge_page() can
140 static inline int split_huge_page(struct page *page) in split_huge_page() function
282 static inline int split_huge_page(struct page *page) in split_huge_page() function
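
These two inline definitions are the usual two-branch header pattern: with transparent hugepages enabled, split_huge_page() wraps the list variant, and with THP compiled out it becomes a stub that reports success, so callers need no #ifdefs. A sketch assuming CONFIG_TRANSPARENT_HUGEPAGE is the gating option and split_huge_page_to_list() the underlying primitive:

    /* Sketch of the two-branch header pattern behind the hits
     * above (huge_mm.h-style; the exact context is an assumption). */
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    int split_huge_page_to_list(struct page *page, struct list_head *list);

    static inline int split_huge_page(struct page *page)
    {
            return split_huge_page_to_list(page, NULL);
    }
    #else
    static inline int split_huge_page(struct page *page)
    {
            return 0;       /* nothing to split without THP */
    }
    #endif
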
1305 if (!PageAnon(p) || unlikely(split_huge_page(p))) { in memory_failure()
1826 if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) { in soft_offline_in_use_page()
375 if (split_huge_page(page)) { in madvise_free_pte_range()
147 ret = split_huge_page(page); in follow_page_pte()
302 ret = split_huge_page(page); in follow_pmd_mask()
1654 split_huge_page(page); in madvise_free_huge_pmd()
2788 if (!split_huge_page(page)) in deferred_split_scan()
2844 if (!split_huge_page(page)) in split_huge_pages_set()
1228 if (split_huge_page(page)) in try_to_merge_one_page()
2176 split_huge_page(page); in cmp_and_merge_page()
2175 ret = split_huge_page(page); in migrate_vma_collect_pmd()
523 ret = split_huge_page(page); in shmem_unused_huge_shrink()
465 } else if (!split_huge_page(page)) { in __iommu_dma_alloc_pages()
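
Several hits above (deferred_split_scan(), split_huge_pages_set(), shmem_unused_huge_shrink()) share a scan-and-split shape: walk a list of candidate huge pages, attempt the split, and count successes via the 0-on-success return. A hypothetical sketch of that shape; demo_scan, pages, and nr_split are made-up names, and the list handling is simplified:

    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/huge_mm.h>
    #include <linux/list.h>

    /* Hypothetical shrinker-style loop modeled on the
     * deferred_split_scan() hit above: split what we can and
     * count the successes. */
    static unsigned long demo_scan(struct list_head *pages)
    {
            struct page *page, *next;
            unsigned long nr_split = 0;

            list_for_each_entry_safe(page, next, pages, lru) {
                    if (!trylock_page(page))
                            continue;       /* skip contended pages */
                    /* 0 return means the huge page is now base pages */
                    if (!split_huge_page(page))
                            nr_split++;
                    unlock_page(page);
            }
            return nr_split;
    }
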