/Linux-v6.1/Documentation/admin-guide/mm/ |
D | hugetlbpage.rst |
      21  Users can use the huge page support in Linux kernel by either using the mmap
      30  persistent hugetlb pages in the kernel's huge page pool. It also displays
      31  default huge page size and information about the number of free, reserved
      32  and surplus huge pages in the pool of huge pages of default size.
      33  The huge page size is needed for generating the proper alignment and
      34  size of the arguments to system calls that map huge page regions.
      48  is the size of the pool of huge pages.
      50  is the number of huge pages in the pool that are not yet
      53  is short for "reserved," and is the number of huge pages for
      55  but no allocation has yet been made. Reserved huge pages
      [all …]
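
The hugetlbpage.rst lines above describe the userspace routes to huge pages (mmap() or SysV shared memory) and the pool counters reported through /proc/meminfo. A minimal sketch of the mmap() route, assuming a kernel with hugetlb support and at least one page reserved in the default-size pool (for example via /proc/sys/vm/nr_hugepages); the 2 MB length is only illustrative:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define LENGTH (2UL * 1024 * 1024)      /* one default-size (here 2 MB) huge page */

    int main(void)
    {
            /* MAP_HUGETLB requests backing from the kernel's huge page pool */
            void *addr = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (addr == MAP_FAILED) {
                    perror("mmap");         /* e.g. the pool (HugePages_Free) is empty */
                    return EXIT_FAILURE;
            }

            ((char *)addr)[0] = 1;          /* touch the page so it is actually faulted in */
            munmap(addr, LENGTH);
            return EXIT_SUCCESS;
    }
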
|
D | transhuge.rst |
      13  using huge pages for the backing of virtual memory with huge pages
      22  the huge page size is 2M, although the actual numbers may vary
      53  collapses sequences of basic pages into huge pages.
     151  By default kernel tries to use huge zero page on read page fault to
     152  anonymous mapping. It's possible to disable huge zero page by writing 0
     221  swap when collapsing a group of pages into a transparent huge page::
     249  ``huge=``. It can have following values:
     252  Attempt to allocate huge pages every time we need a new page;
     255  Do not allocate huge pages;
     258  Only allocate huge page if it will be fully within i_size.
      [all …]
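
Several of the transhuge.rst lines above concern policy knobs: the huge zero page toggle under /sys/kernel/mm/transparent_hugepage/ and the tmpfs ``huge=`` mount option (always / never / within_size / advise). From the application side, the documented way to opt a mapping into THP when the global policy is "madvise" is madvise(MADV_HUGEPAGE); a small sketch, with the buffer size chosen only for illustration:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    #define SZ (8UL * 1024 * 1024)  /* a few PMD-sized (2 MB) units, illustrative */

    int main(void)
    {
            void *buf = mmap(NULL, SZ, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /*
             * Mark the range as a good candidate for transparent huge pages;
             * with transparent_hugepage/enabled set to "madvise" this is what
             * opts the mapping in.
             */
            if (madvise(buf, SZ, MADV_HUGEPAGE))
                    perror("madvise(MADV_HUGEPAGE)");

            munmap(buf, SZ);
            return 0;
    }
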
|
D | concepts.rst |
      81  `huge`. Usage of huge pages significantly reduces pressure on TLB,
      85  memory with the huge pages. The first one is `HugeTLB filesystem`, or
      88  the memory and mapped using huge pages. The hugetlbfs is described at
      91  Another, more recent, mechanism that enables use of the huge pages is
      94  the system memory should and can be mapped by the huge pages, THP
     204  buffer for DMA, or when THP allocates a huge page. Memory `compaction`
|
/Linux-v6.1/tools/testing/selftests/vm/ |
D | charge_reserved_hugetlb.sh |
      52  if [[ -e /mnt/huge ]]; then
      53  rm -rf /mnt/huge/*
      54  umount /mnt/huge || echo error
      55  rmdir /mnt/huge
     260  if [[ -e /mnt/huge ]]; then
     261  rm -rf /mnt/huge/*
     262  umount /mnt/huge
     263  rmdir /mnt/huge
     290  mkdir -p /mnt/huge
     291  mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
      [all …]
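
The selftest fragments above create, use and tear down a hugetlbfs instance under /mnt/huge. Roughly the same setup expressed through the mount(2)/umount(2) system calls; a sketch only, to be run as root, where the mount point and size option mirror the script and the 2M page size stands in for the script's ${MB}M parameter:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mount.h>
    #include <sys/stat.h>

    int main(void)
    {
            /* mkdir -p /mnt/huge */
            mkdir("/mnt/huge", 0755);

            /* mount -t hugetlbfs -o pagesize=2M,size=256M none /mnt/huge */
            if (mount("none", "/mnt/huge", "hugetlbfs", 0,
                      "pagesize=2M,size=256M")) {
                    perror("mount");
                    return 1;
            }

            /* ... the test would run here; then the script's cleanup path ... */
            if (umount("/mnt/huge"))
                    perror("umount");
            return 0;
    }
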
|
/Linux-v6.1/Documentation/mm/ |
D | hugetlbfs_reserv.rst |
      11  preallocated for application use. These huge pages are instantiated in a
      12  task's address space at page fault time if the VMA indicates huge pages are
      13  to be used. If no huge page exists at page fault time, the task is sent
      14  a SIGBUS and often dies an unhappy death. Shortly after huge page support
      16  of huge pages at mmap() time. The idea is that if there were not enough
      17  huge pages to cover the mapping, the mmap() would fail. This was first
      19  were enough free huge pages to cover the mapping. Like most things in the
      21  'reserve' huge pages at mmap() time to ensure that huge pages would be
      23  describe how huge page reserve processing is done in the v4.10 kernel.
      36  This is a global (per-hstate) count of reserved huge pages. Reserved
      [all …]
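
The hugetlbfs_reserv.rst lines above explain why reservations exist: without them a mapping could succeed at mmap() time and then hit SIGBUS at fault time when the pool runs dry. The userspace-visible effect, sketched below under the assumption of a hugetlbfs mount at the hypothetical path /mnt/huge, is that a MAP_SHARED mapping either takes its reservation up front or mmap() fails outright:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define LEN (4UL * 2 * 1024 * 1024)     /* four 2 MB huge pages, illustrative */

    int main(void)
    {
            int fd = open("/mnt/huge/rsvd-test", O_CREAT | O_RDWR, 0600); /* hypothetical path */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /*
             * For shared hugetlbfs mappings the huge pages are reserved here,
             * so a later fault on any page of the range cannot raise SIGBUS
             * for lack of a free huge page.
             */
            void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    perror("mmap");         /* the reservation could not be made */
            else
                    munmap(p, LEN);

            close(fd);
            return 0;
    }
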
|
D | transhuge.rst |
      15  knowledge fall back to breaking huge pmd mapping into table of ptes and,
      43  is complete, so they won't ever notice the fact the page is huge. But
      59  Code walking pagetables but unaware about huge pmds can simply call
      94  To make pagetable walks huge pmd aware, all you need to do is to call
      96  mmap_lock in read (or write) mode to be sure a huge pmd cannot be
     102  page table lock will prevent the huge pmd being converted into a
     106  before. Otherwise, you can proceed to process the huge pmd and the
     109  Refcounts and transparent huge pages
     124  (stored in first tail page). For file huge pages, we also increment
     151  requests to split pinned huge pages: it expects page count to be equal to
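
The mm/transhuge.rst lines above give page-table walkers two choices: call split_huge_pmd() and carry on with ordinary ptes, or become huge-pmd aware by taking pmd_trans_huge_lock() while holding mmap_lock. A condensed in-kernel sketch of the second pattern as the document describes it; the walker function and its pte-level fallback are hypothetical names, only the pmd_trans_huge_lock()/spin_unlock() usage follows the text:

    /*
     * Sketch of a huge-pmd-aware walk per Documentation/mm/transhuge.rst.
     * The caller is assumed to hold mmap_lock in read (or write) mode so the
     * huge pmd cannot be split from under us.
     */
    static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                             unsigned long addr)
    {
            spinlock_t *ptl;

            ptl = pmd_trans_huge_lock(pmd, vma);
            if (ptl) {
                    /* the pmd maps a huge page: handle the whole range at once */
                    spin_unlock(ptl);
                    return;
            }

            /* not a huge pmd (or already split): fall back to a pte-level walk */
            walk_ptes_of_pmd(vma, pmd, addr);       /* hypothetical helper */
    }
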
|
D | arch_pgtable_helpers.rst |
     141  | pmd_set_huge | Creates a PMD huge mapping |
     143  | pmd_clear_huge | Clears a PMD huge mapping |
     197  | pud_set_huge | Creates a PUD huge mapping |
     199  | pud_clear_huge | Clears a PUD huge mapping |
|
D | unevictable-lru.rst |
     312  (unless it is a PTE mapping of a part of a transparent huge page). Or when
     347  hugetlbfs ranges, allocating the huge pages and populating the PTEs.
     433  A transparent huge page is represented by a single entry on an LRU list.
     437  If a user tries to mlock() part of a huge page, and no user mlock()s the
     438  whole of the huge page, we want the rest of the page to be reclaimable.
     443  We handle this by keeping PTE-mlocked huge pages on evictable LRU lists:
     446  This way the huge page is accessible for vmscan. Under memory pressure the
     451  of a transparent huge page which are mapped only by PTEs in VM_LOCKED VMAs.
     487  (unless it was a PTE mapping of a part of a transparent huge page).
     511  (unless it was a PTE mapping of a part of a transparent huge page).
|
/Linux-v6.1/arch/powerpc/include/asm/nohash/32/ |
D | pgtable.h |
     236  static int number_of_cells_per_pte(pmd_t *pmd, pte_basic_t val, int huge) in number_of_cells_per_pte() argument
     238  if (!huge) in number_of_cells_per_pte()
     249  unsigned long clr, unsigned long set, int huge) in pte_update() argument
     257  num = number_of_cells_per_pte(pmd, new, huge); in pte_update()
     278  unsigned long clr, unsigned long set, int huge) in pte_update() argument
     328  int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
     330  pte_update(vma->vm_mm, address, ptep, 0, set, huge); in __ptep_set_access_flags()
|
D | pte-8xx.h |
     140  unsigned long clr, unsigned long set, int huge);
     153  int huge = psize > mmu_virtual_psize ? 1 : 0; in __ptep_set_access_flags() local
     155  pte_update(vma->vm_mm, address, ptep, clr, set, huge); in __ptep_set_access_flags()
|
/Linux-v6.1/arch/powerpc/include/asm/book3s/64/ |
D | hash.h |
     147  pte_t *ptep, unsigned long pte, int huge);
     154  int huge) in hash__pte_update() argument
     172  if (!huge) in hash__pte_update()
     177  hpte_need_flush(mm, addr, ptep, old, huge); in hash__pte_update()
|
D | radix.h |
     176  int huge) in radix__pte_update() argument
     181  if (!huge) in radix__pte_update()
|
/Linux-v6.1/Documentation/core-api/ |
D | pin_user_pages.rst |
      65  huge pages, because each tail page adds a refcount to the head page. And in
      67  page overflows were seen in some huge page stress tests.
      69  This also means that huge pages and compound pages do not suffer
     241  acquired since the system was powered on. For huge pages, the head page is
     242  pinned once for each page (head page and each tail page) within the huge page.
     243  This follows the same sort of behavior that get_user_pages() uses for huge
     244  pages: the head page is refcounted once for each tail or head page in the huge
     245  page, when get_user_pages() is applied to a huge page.
     249  PAGE_SIZE granularity, even if the original pin was applied to a huge page.
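
The pin_user_pages.rst lines above cover how pin accounting interacts with huge pages (the head page is counted once per PAGE_SIZE sub-page) and note that unpinning is likewise done at PAGE_SIZE granularity. A minimal in-kernel sketch of the pin/unpin pairing for a long-term DMA user; the function name, address and page count are illustrative:

    /*
     * Sketch of FOLL_PIN usage per Documentation/core-api/pin_user_pages.rst.
     * "uaddr", "nr" and the surrounding driver context are hypothetical; the
     * point is that every page obtained with pin_user_pages_fast() is later
     * released with unpin_user_pages(), one unpin per PAGE_SIZE page even
     * when the range is backed by huge pages.
     */
    static int pin_buffer_for_dma(unsigned long uaddr, int nr, struct page **pages)
    {
            int pinned;

            pinned = pin_user_pages_fast(uaddr, nr, FOLL_WRITE | FOLL_LONGTERM,
                                         pages);
            if (pinned < 0)
                    return pinned;

            /* ... set up and run DMA against pages[0..pinned-1] ... */

            unpin_user_pages(pages, pinned);
            return 0;
    }
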
|
/Linux-v6.1/Documentation/admin-guide/hw-vuln/ |
D | multihit.rst |
      81  * - KVM: Mitigation: Split huge pages
     111  In order to mitigate the vulnerability, KVM initially marks all huge pages
     125  The KVM hypervisor mitigation mechanism for marking huge pages as
     134  non-executable huge pages in Linux kernel KVM module. All huge
|
/Linux-v6.1/arch/alpha/lib/ |
D | ev6-clear_user.S |
      86  subq $1, 16, $4 # .. .. .. E : If < 16, we can not use the huge loop
      87  and $16, 0x3f, $2 # .. .. E .. : Forward work for huge loop
      88  subq $2, 0x40, $3 # .. E .. .. : bias counter (huge loop)
|
/Linux-v6.1/arch/powerpc/mm/book3s64/ |
D | hash_tlb.c |
      41  pte_t *ptep, unsigned long pte, int huge) in hpte_need_flush() argument
      61  if (huge) { in hpte_need_flush()
|
/Linux-v6.1/drivers/misc/lkdtm/ |
D | bugs.c |
     276  volatile unsigned int huge = INT_MAX - 2; variable
     283  value = huge; in lkdtm_OVERFLOW_SIGNED()
     298  value = huge; in lkdtm_OVERFLOW_UNSIGNED()
|
/Linux-v6.1/mm/ |
D | shmem.c |
     117  int huge; member
     482  switch (SHMEM_SB(inode->i_sb)->huge) { in shmem_is_huge()
     520  static const char *shmem_format_huge(int huge) in shmem_format_huge() argument
     522  switch (huge) { in shmem_format_huge()
    1579  pgoff_t index, bool huge) in shmem_alloc_and_acct_folio() argument
    1587  huge = false; in shmem_alloc_and_acct_folio()
    1588  nr = huge ? HPAGE_PMD_NR : 1; in shmem_alloc_and_acct_folio()
    1593  if (huge) in shmem_alloc_and_acct_folio()
    2205  if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER) in shmem_get_unmapped_area()
    3533  ctx->huge = result.uint_32; in shmem_parse_one()
     [all …]
|
/Linux-v6.1/Documentation/riscv/ |
D | vm-layout.rst |
      42  …0000004000000000 | +256 GB | ffffffbfffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
      78  …0000800000000000 | +128 TB | ffff7fffffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
|
/Linux-v6.1/Documentation/features/vm/huge-vmap/ |
D | arch-support.txt | 2 # Feature name: huge-vmap
|
/Linux-v6.1/arch/parisc/mm/ |
D | init.c |
     398  bool huge = false; in map_pages() local
     408  huge = true; in map_pages()
     413  huge = true; in map_pages()
     419  if (huge) in map_pages()
|
/Linux-v6.1/arch/powerpc/include/asm/nohash/64/ |
D | pgtable.h |
     178  int huge) in pte_update() argument
     184  if (!huge) in pte_update()
|
/Linux-v6.1/fs/netfs/ |
D | Kconfig | 8 segmentation, local caching and transparent huge page support.
|
/Linux-v6.1/Documentation/mm/damon/ |
D | design.rst |
      49  Only small parts in the super-huge virtual address space of the processes are
      54  cases. That said, too huge unmapped areas inside the monitoring target should
      63  exceptionally huge in usual address spaces, excluding these will be sufficient
|
/Linux-v6.1/lib/ |
D | test_maple_tree.c |
     262  unsigned long huge = 4000UL * 1000 * 1000; in check_lb_not_empty() local
     265  i = huge; in check_lb_not_empty()
     268  for (j = huge; j >= i; j /= 2) { in check_lb_not_empty()
     287  unsigned long huge; in check_upper_bound_split() local
     292  huge = 2147483647UL; in check_upper_bound_split()
     294  huge = 4000UL * 1000 * 1000; in check_upper_bound_split()
     297  while (i < huge) { in check_upper_bound_split()
     299  for (j = i; j >= huge; j *= 2) { in check_upper_bound_split()
     311  unsigned long huge = 8000UL * 1000 * 1000; in check_mid_split() local
     313  check_insert(mt, huge, (void *) huge); in check_mid_split()
|