Searched refs: split_huge_page (Results 1 – 9 of 9) sorted by relevance
70 calling split_huge_page(page). This is what the Linux VM does before
71 it tries to swapout the hugepage for example. split_huge_page() can fail
92 split_huge_page() or split_huge_pmd() has a cost.
146 split_huge_page internally has to distribute the refcounts in the head
150 additional pins (i.e. from get_user_pages). split_huge_page() fails any
152 the sum of mapcount of all sub-pages plus one (split_huge_page caller must
155 split_huge_page uses migration entries to stabilize page->_refcount and
451 We cannot just split the page on partial mlock() as split_huge_page() can
190 static inline int split_huge_page(struct page *page) in split_huge_page() function
401 static inline int split_huge_page(struct page *page) in split_huge_page() function
357 err = split_huge_page(page); in madvise_cold_or_pageout_pte_range()
425 if (split_huge_page(page)) { in madvise_cold_or_pageout_pte_range()
636 if (split_huge_page(page)) { in madvise_free_pte_range()
1544 split_huge_page(page); in madvise_free_huge_pmd()
2840 if (!split_huge_page(page)) in deferred_split_scan()
2895 if (!split_huge_page(page)) in split_huge_pages_all()
2986 if (!split_huge_page(page)) in split_huge_pages_pid()
3044 if (!split_huge_page(fpage)) in split_huge_pages_in_file()
1219 if (split_huge_page(page)) in try_to_merge_one_page()
2183 split_huge_page(page); in cmp_and_merge_page()
631 ret = split_huge_page(page); in shmem_unused_huge_shrink()
908 return split_huge_page(page) >= 0; in shmem_punch_compound()
1355 if (split_huge_page(page) < 0) in shmem_writepage()
1403 if (!PageAnon(page) || unlikely(split_huge_page(page))) { in try_to_split_thp_page()
2288 ret = split_huge_page(page); in migrate_vma_collect_pmd()