Lines matching full:pages

3  * Free some vmemmap pages of HugeTLB
13 * HugeTLB pages consist of multiple base page size pages and are supported by
15 * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
17 * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
18 * 4096 base pages. For each base page, there is a corresponding page struct.
23 * is the compound_head field, and this field is the same for all tail pages.
25 * By removing redundant page structs for HugeTLB pages, memory can be returned
28 * Different architectures support different HugeTLB pages. For example, the
30 * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
47 * structs whose size is (unit: pages):
70 * = 8 (pages)
85 * = PAGE_SIZE / 8 * 8 (pages)
86 * = PAGE_SIZE (pages)
95 * show the internal implementation of this optimization. There are 8 pages
100 * HugeTLB struct pages (8 pages) page frame (8 pages)
123 * The value of page->compound_head is the same for all tail pages. The first
126 * pages of page structs (page 1 to page 7) is to point to page->compound_head.
127 * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
129 * 6 pages to the buddy allocator.
133 * HugeTLB struct pages (8 pages) page frame (8 pages)
156 * When a HugeTLB page is freed to the buddy system, we should allocate 6 pages for
157 * vmemmap pages and restore the previous mapping relationship.
160 * We can also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
169 * size of the struct page structs is greater than 2 pages.
177 * For tail pages, the value of compound_head is the same. So we can reuse the first
179 * page of tail page structs: the remaining pages of tail page structs are mapped to
180 * the first tail page struct, and their page frames freed. Therefore, we need to reserve two pages as vmemmap areas.
191 pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n"); in early_hugetlb_free_vmemmap_param()
215 * Previously discarded vmemmap pages will be allocated and remapped
231 * The pages which the vmemmap virtual address range [@vmemmap_addr, in alloc_huge_page_vmemmap()
235 * discarded vmemmap pages must be allocated and remapped. in alloc_huge_page_vmemmap()
260 * to the page which @vmemmap_reuse is mapped to, then free the pages in free_huge_page_vmemmap()
286 * allocator, the other pages will map to the first tail page, so they in hugetlb_vmemmap_init()
296 pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages, in hugetlb_vmemmap_init()