Lines Matching +full:sub +full:- +full:components

14 - "graceful fallback": mm components which don't have transparent hugepage
16 if necessary, split a transparent hugepage. Therefore these components
19 - if a hugepage allocation fails because of memory fragmentation,
24 - if some task quits and more hugepages become available (either
29 - it doesn't require memory reservation and in turn it uses hugepages
45 page (like for checking page->mapping or other bits that are relevant
77 diff --git a/mm/mremap.c b/mm/mremap.c
78 --- a/mm/mremap.c
80 @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
101 page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
115 - get_page()/put_page() and GUP operate on the head page's ->_refcount.
117 - ->_refcount in tail pages is always zero: get_page_unless_zero() never
120 - map/unmap of the pages with PTE entry increment/decrement ->_mapcount
121 on the relevant sub-page of the compound page.
123 - map/unmap of the whole compound page is accounted for in compound_mapcount
125 ->_mapcount of all sub-pages in order to have race-free detection of
130 For anonymous pages, PageDoubleMap() also indicates ->_mapcount in all
132 get race-free detection of unmap of subpages when we have them mapped with
135 This optimization is required to lower the overhead of per-subpage mapcount
136 tracking. The alternative is to alter ->_mapcount in all subpages on each
152 the sum of the mapcounts of all sub-pages plus one (the split_huge_page caller must
155 split_huge_page uses migration entries to stabilize page->_refcount and
156 page->_mapcount of anonymous pages. File pages just get unmapped.
161 All tail pages have zero ->_refcount until atomic_add(). This prevents the
163 atomic_add() we don't care about the ->_refcount value. We already know how