Searched refs:split_huge_page (Results 1 – 11 of 11) sorted by relevance
75 calling split_huge_page(page). This is what the Linux VM does before
76 it tries to swapout the hugepage for example. split_huge_page() can fail
97 split_huge_page() or split_huge_pmd() has a cost.
151 split_huge_page internally has to distribute the refcounts in the head
155 additional pins (i.e. from get_user_pages). split_huge_page() fails any
157 the sum of mapcount of all sub-pages plus one (split_huge_page caller must
160 split_huge_page uses migration entries to stabilize page->_refcount and
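The documentation hits above describe the caller contract for split_huge_page(): the caller must hold a reference on the head page, the page must be locked, and the split fails if anyone else holds additional pins (e.g. from get_user_pages). A minimal sketch of the call pattern visible in the madvise/gup hits below; this is kernel-internal code, not standalone-compilable, and the helper name try_split_page() is hypothetical:

```c
/* Sketch only, assuming kernel context. Mirrors the pattern in the
 * search hits: pin the page, take the page lock, attempt the split,
 * and handle failure, since split_huge_page() returns nonzero when
 * the page has extra pins or the refcounts cannot be distributed.
 */
static int try_split_page(struct page *page)
{
	int ret;

	get_page(page);		/* caller must hold a reference on the head page */
	lock_page(page);	/* split_huge_page() requires the page lock */
	ret = split_huge_page(page);	/* 0 on success, nonzero on failure */
	unlock_page(page);
	put_page(page);
	return ret;
}
```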
457 We cannot just split the page on partial mlock() as split_huge_page() can
166 static inline int split_huge_page(struct page *page) in split_huge_page() function
328 static inline int split_huge_page(struct page *page) in split_huge_page() function
347 err = split_huge_page(page); in madvise_cold_or_pageout_pte_range()
415 if (split_huge_page(page)) { in madvise_cold_or_pageout_pte_range()
621 if (split_huge_page(page)) { in madvise_free_pte_range()
1307 if (!PageAnon(p) || unlikely(split_huge_page(p))) { in memory_failure()
1830 if (!PageAnon(page) || unlikely(split_huge_page(page))) { in soft_offline_in_use_page()
247 ret = split_huge_page(page); in follow_page_pte()
403 ret = split_huge_page(page); in follow_pmd_mask()
1736 split_huge_page(page); in madvise_free_huge_pmd()
2937 if (!split_huge_page(page)) in deferred_split_scan()
2994 if (!split_huge_page(page)) in split_huge_pages_set()
1224 if (split_huge_page(page)) in try_to_merge_one_page()
2182 split_huge_page(page); in cmp_and_merge_page()
2196 ret = split_huge_page(page); in migrate_vma_collect_pmd()
542 ret = split_huge_page(page); in shmem_unused_huge_shrink()
535 } else if (!split_huge_page(page)) { in __iommu_dma_alloc_pages()