/Linux-v5.4/Documentation/admin-guide/mm/ |
D | hugetlbpage.rst |
    21  Users can use the huge page support in Linux kernel by either using the mmap
    30  persistent hugetlb pages in the kernel's huge page pool. It also displays
    31  default huge page size and information about the number of free, reserved
    32  and surplus huge pages in the pool of huge pages of default size.
    33  The huge page size is needed for generating the proper alignment and
    34  size of the arguments to system calls that map huge page regions.
    48  is the size of the pool of huge pages.
    50  is the number of huge pages in the pool that are not yet
    53  is short for "reserved," and is the number of huge pages for
    55  but no allocation has yet been made. Reserved huge pages
    [all …]
|
D | transhuge.rst |
    13  using huge pages for the backing of virtual memory with huge pages
    22  the huge page size is 2M, although the actual numbers may vary
    53  collapses sequences of basic pages into huge pages.
    151 By default kernel tries to use huge zero page on read page fault to
    152 anonymous mapping. It's possible to disable huge zero page by writing 0
    214 swap when collapsing a group of pages into a transparent huge page::
    235 ``huge=``. It can have following values:
    238 Attempt to allocate huge pages every time we need a new page;
    241 Do not allocate huge pages;
    244 Only allocate huge page if it will be fully within i_size.
    [all …]
|
D | concepts.rst |
    81  `huge`. Usage of huge pages significantly reduces pressure on TLB,
    85  memory with the huge pages. The first one is `HugeTLB filesystem`, or
    88  the memory and mapped using huge pages. The hugetlbfs is described at
    91  Another, more recent, mechanism that enables use of the huge pages is
    94  the system memory should and can be mapped by the huge pages, THP
    204 buffer for DMA, or when THP allocates a huge page. Memory `compaction`
|
D | idle_page_tracking.rst |
    44  For huge pages the idle flag is set only on the head page, so one has to read
    45  ``/proc/kpageflags`` in order to correctly count idle huge pages.
|
/Linux-v5.4/Documentation/vm/ |
D | hugetlbfs_reserv.rst |
    11  preallocated for application use. These huge pages are instantiated in a
    12  task's address space at page fault time if the VMA indicates huge pages are
    13  to be used. If no huge page exists at page fault time, the task is sent
    14  a SIGBUS and often dies an unhappy death. Shortly after huge page support
    16  of huge pages at mmap() time. The idea is that if there were not enough
    17  huge pages to cover the mapping, the mmap() would fail. This was first
    19  were enough free huge pages to cover the mapping. Like most things in the
    21  'reserve' huge pages at mmap() time to ensure that huge pages would be
    23  describe how huge page reserve processing is done in the v4.10 kernel.
    36  This is a global (per-hstate) count of reserved huge pages. Reserved
    [all …]
|
D | transhuge.rst |
    15  knowledge fall back to breaking huge pmd mapping into table of ptes and,
    43  is complete, so they won't ever notice the fact the page is huge. But
    64  Code walking pagetables but unaware about huge pmds can simply call
    99  To make pagetable walks huge pmd aware, all you need to do is to call
    101 mmap_sem in read (or write) mode to be sure a huge pmd cannot be
    107 page table lock will prevent the huge pmd being converted into a
    111 before. Otherwise, you can proceed to process the huge pmd and the
    114 Refcounts and transparent huge pages
    129 (stored in first tail page). For file huge pages, we also increment
    156 requests to split pinned huge pages: it expects page count to be equal to
|
/Linux-v5.4/drivers/gpu/drm/ttm/ |
D | ttm_page_alloc.c |
    221 static struct ttm_page_pool *ttm_get_pool(int flags, bool huge,  in ttm_get_pool() argument
    235     if (huge)  in ttm_get_pool()
    239     } else if (huge) {  in ttm_get_pool()
    713     struct ttm_page_pool *huge = ttm_get_pool(flags, true, cstate);  in ttm_put_pages() local
    759     if (huge) {  in ttm_put_pages()
    762         spin_lock_irqsave(&huge->lock, irq_flags);  in ttm_put_pages()
    777             list_add_tail(&pages[i]->lru, &huge->list);  in ttm_put_pages()
    781             huge->npages++;  in ttm_put_pages()
    787         if (huge->npages > max_size)  in ttm_put_pages()
    788             n2free = huge->npages - max_size;  in ttm_put_pages()
    [all …]
|
/Linux-v5.4/arch/powerpc/include/asm/book3s/64/ |
D | hash.h |
    147     pte_t *ptep, unsigned long pte, int huge);
    154     int huge)  in hash__pte_update() argument
    172     if (!huge)  in hash__pte_update()
    177         hpte_need_flush(mm, addr, ptep, old, huge);  in hash__pte_update()
|
D | radix.h |
    154     int huge)  in radix__pte_update() argument
    159     if (!huge)  in radix__pte_update()
|
/Linux-v5.4/Documentation/admin-guide/hw-vuln/ |
D | multihit.rst |
    81  * - KVM: Mitigation: Split huge pages
    107 In order to mitigate the vulnerability, KVM initially marks all huge pages
    121 The KVM hypervisor mitigation mechanism for marking huge pages as
    130 non-executable huge pages in Linux kernel KVM module. All huge
|
/Linux-v5.4/arch/alpha/lib/ |
D | ev6-clear_user.S |
    86  subq $1, 16, $4    # .. .. .. E : If < 16, we can not use the huge loop
    87  and $16, 0x3f, $2  # .. .. E .. : Forward work for huge loop
    88  subq $2, 0x40, $3  # .. E .. .. : bias counter (huge loop)
|
/Linux-v5.4/mm/ |
D | shmem.c |
    118     int huge;  member
    438 static const char *shmem_format_huge(int huge)  in shmem_format_huge() argument
    440     switch (huge) {  in shmem_format_huge()
    597     (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) &&  in is_huge_enabled()
    1506    pgoff_t index, bool huge)  in shmem_alloc_and_acct_page() argument
    1514    huge = false;  in shmem_alloc_and_acct_page()
    1515    nr = huge ? HPAGE_PMD_NR : 1;  in shmem_alloc_and_acct_page()
    1520    if (huge)  in shmem_alloc_and_acct_page()
    1814    switch (sbinfo->huge) {  in shmem_getpage_gfp()
    2128    if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)  in shmem_get_unmapped_area()
    [all …]
|
D | Kconfig |
    217 with the reduced number of transparent huge pages that could be used
    250 to the processors accessing. The second is when allocating huge
    251 pages as migration can relocate pages to satisfy a huge page
    376 Transparent Hugepages allows the kernel to use huge pages and
    377 huge tlb transparently to the applications whenever possible.
    416 Swap transparent huge pages in one piece, without splitting.
    417 XXX: For now, swap cluster backing transparent huge page
|
/Linux-v5.4/arch/powerpc/mm/book3s64/ |
D | hash_tlb.c |
    42  pte_t *ptep, unsigned long pte, int huge)  in hpte_need_flush() argument
    62      if (huge) {  in hpte_need_flush()
|
/Linux-v5.4/Documentation/features/vm/huge-vmap/ |
D | arch-support.txt | 2 # Feature name: huge-vmap
|
/Linux-v5.4/arch/parisc/mm/ |
D | init.c |
    434     bool huge = false;  in map_pages() local
    444         huge = true;  in map_pages()
    449         huge = true;  in map_pages()
    455     if (huge)  in map_pages()
|
/Linux-v5.4/arch/powerpc/include/asm/nohash/64/ |
D | pgtable.h |
    212     int huge)  in pte_update() argument
    231     if (!huge)  in pte_update()
|
/Linux-v5.4/tools/testing/selftests/vm/ |
D | run_vmtests | 8 mnt=./huge
|
/Linux-v5.4/arch/arc/plat-eznps/ |
D | entry.S | 33 ; FMT are huge pages for user application reside at 0-2G.
|
/Linux-v5.4/include/linux/ |
D | shmem_fs.h | 35 unsigned char huge; /* Whether to try for hugepages */ member
|
/Linux-v5.4/Documentation/filesystems/ext4/ |
D | bigalloc.rst | 9 exceeds the page size. However, for a filesystem of mostly huge files,
|
/Linux-v5.4/Documentation/x86/x86_64/ |
D | mm.rst |
    35  …0000800000000000 | +128 TB | ffff7fffffffffff | ~16M TB | ... huge, almost 64 bits wide hole of…
    94  …0100000000000000 |  +64 PB | feffffffffffffff | ~16K PB | ... huge, still almost 64 bits wide h…
|
/Linux-v5.4/Documentation/admin-guide/blockdev/ |
D | zram.rst |
    132 size of the disk when not in use so a huge zram is wasteful.
    321 echo huge > /sys/block/zramX/write
    412 huge page
    417 and the block's state is huge so it is written back to the backing
|
/Linux-v5.4/Documentation/usb/ |
D | mtouchusb.rst | 83 A huge thank you to 3M Touch Systems for the EXII-5010UC controllers for
|
/Linux-v5.4/Documentation/scsi/ |
D | scsi-changer.txt |
    41  None of these is limited to one: A huge Jukebox could have slots for
    69  works fine with small (11 slots) and a huge (4 MOs, 88 slots)
|