Lines Matching +full:page +full:- +full:based

1 # SPDX-License-Identifier: GPL-2.0-only
33 compress them into a dynamically allocated RAM-based memory pool.
55 If exclusive loads are enabled, when a page is loaded from zswap,
59 This avoids having two copies of the same page in memory
60 (compressed and uncompressed) after faulting in a page from zswap.
61 The cost is that if the page was never dirtied and needs to be
62 swapped out again, it will be re-compressed.
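The exclusive-loads behavior described in this help text can be sketched as a toy model (not kernel code; class and method names here are invented for illustration): on load, the compressed copy is either kept or invalidated, trading a possible re-compression for never holding two copies of the same page.

```python
# Toy model of zswap "exclusive loads" (illustration only, not kernel code).
class ToyZswap:
    def __init__(self, exclusive: bool):
        self.exclusive = exclusive
        self.pool = {}                # page id -> (pretend-)compressed data

    def store(self, page_id, data):
        self.pool[page_id] = data     # stand-in for compression into the pool

    def load(self, page_id):
        data = self.pool[page_id]
        if self.exclusive:
            # avoid two copies (compressed + uncompressed): drop ours
            del self.pool[page_id]
        return data

z = ToyZswap(exclusive=True)
z.store(1, b"page contents")
z.load(1)
assert 1 not in z.pool    # clean page must be re-compressed if swapped again
```

With `exclusive=False` the compressed copy survives the load, which is exactly the double-copy cost the option avoids.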
74 available at the following LWN page:
177 page. While this design limits storage density, it has simple and
187 page. It is a ZBUD derivative, so the simplicity and determinism are
195 zsmalloc is a slab-based memory allocator designed to store
210 int "Maximum number of physical pages per-zspage"
216 that a zsmalloc page (zspage) can consist of. The optimal zspage
243 If you cannot migrate to SLUB, please contact linux-mm@kvack.org
312 sanity-checking than others. This option is most effective with
326 Try running: slabinfo -DA
345 normal kmalloc allocation and makes kmalloc randomly pick one based
359 bool "Page allocator randomization"
362 Randomization of the page allocator improves the average
363 utilization of a direct-mapped memory-side-cache. See section
366 the presence of a memory-side-cache. There are also incidental
367 security benefits as it reduces the predictability of page
370 order of pages is selected based on cache utilization benefits
376 after runtime detection of a direct-mapped memory-side-cache.
387 also breaks ancient binaries (including anything libc5 based).
392 On non-ancient distros (post-2000 ones) N is usually a safe choice.
407 ELF-FDPIC binfmt's brk and stack allocator.
411 userspace. Since that isn't generally a problem on no-MMU systems,
414 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
435 This option is best suited for non-NUMA systems with
451 memory hot-plug systems. This is normal.
455 hot-plug and hot-remove.
525 # Keep arch NUMA mapping infrastructure post-init.
571 See Documentation/admin-guide/mm/memory-hotplug.rst for more information.
573 Say Y here if you want all hot-plugged memory blocks to appear in
575 Say N here if you want the default policy to keep all hot-plugged
594 # Heavily threaded applications may benefit from splitting the mm-wide
598 # ARM's adjust_pte (unused if VIPT) depends on mm-wide page_table_lock.
599 # PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
600 # SPARC32 allocates multiple pte tables within a single page, and therefore
601 # a per-page lock leads to problems when multiple tables need to be locked
603 # DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
646 reliably. The page allocator relies on compaction heavily and
651 linux-mm@kvack.org.
660 # support for free page reporting
662 bool "Free page reporting"
665 Free page reporting allows for the incremental acquisition of
671 # support for page migration
674 bool "Page migration"
682 pages as migration can relocate pages to satisfy a huge page
698 HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
724 bool "Enable KSM for page merging"
731 the many instances by a single page with that content, so
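The merging idea in this help text (many identical pages collapsed to a single stored copy) can be illustrated with a toy content-keyed map (illustration only; real KSM works on page contents with copy-on-write remapping):

```python
# Toy illustration of KSM-style merging (not kernel code): pages with
# identical contents collapse to one stored copy.
page_a = bytes(4096)          # a page of zeros
page_b = bytes(4096)          # a second, content-identical page of zeros
page_c = b"\x01" * 4096       # a page with different contents

store = {}

def dedup(page):
    # identical content maps to the single previously stored copy
    return store.setdefault(page, page)

merged = [dedup(p) for p in (page_a, page_b, page_c)]
assert page_a == page_b and page_a is not page_b   # distinct but identical
assert merged[0] is merged[1]                      # now share one copy
assert len(store) == 2                             # only two copies kept
```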
784 allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
793 long-term mappings means that the space is wasted.
803 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
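The space-waste concern in this help text follows from the 2^N*PAGE_SIZE granularity; a toy rounding function (not kernel code) shows how a request just over a power of two wastes almost half a granule:

```python
# Toy illustration of a 2^N-page granule allocator (not kernel code).
def granule_pages(nr_pages: int) -> int:
    """Smallest power-of-two page count that covers the request."""
    n = 1
    while n < nr_pages:
        n <<= 1
    return n

# A 33-page long-term mapping occupies a 64-page granule: 31 pages wasted.
assert granule_pages(33) == 64
assert granule_pages(33) - 33 == 31
```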
820 applications by speeding up page faults during memory
857 XXX: For now, swap cluster backing transparent huge page
863 bool "Read-only THP for filesystems (EXPERIMENTAL)"
867 Allow khugepaged to put read-only file-backed pages in THP.
876 # UP and nommu archs use km based percpu allocator
902 subsystems to allocate big physically-contiguous blocks of memory.
950 soft-dirty bit on PTEs. This bit is set when someone writes
951 into a page, just like the regular dirty bit, but unlike the latter
954 See Documentation/admin-guide/mm/soft-dirty.rst for more details.
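The soft-dirty bit is exposed to userspace via /proc/PID/pagemap, where it occupies bit 55 of each 64-bit entry (per the pagemap documentation). A minimal decoder, using a synthetic entry value rather than a real pagemap read:

```python
# Decode the soft-dirty flag from a /proc/<pid>/pagemap entry.
# Bit 55 is the soft-dirty bit (Documentation/admin-guide/mm/pagemap.rst);
# the entry value below is synthetic, not read from a live process.
SOFT_DIRTY_BIT = 55

def soft_dirty(pagemap_entry: int) -> bool:
    return bool((pagemap_entry >> SOFT_DIRTY_BIT) & 1)

assert soft_dirty(1 << 55)
assert not soft_dirty(1 << 54)
```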
960 int "Default maximum user stack size for 32-bit processes (MB)"
965 This is the maximum stack size in Megabytes in the VM layout of 32-bit
990 This adds PG_idle and PG_young flags to 'struct page'. PTE Accessed
995 bool "Enable idle page tracking"
1004 See Documentation/admin-guide/mm/idle_page_tracking.rst for
1014 checking, an architecture-agnostic way to find the stack pointer
1046 "device-physical" addresses which is needed for using a DAX
1052 # Helpers to mirror range of the CPU page tables of a process into device page
1084 Enable the definition of PG_arch_x page flags with x > 1. Only
1085 suitable for 64-bit architectures with CONFIG_FLATMEM or
1087 enough room for additional bits in page->flags.
1095 on EXPERT systems. /proc/vmstat will only show page counts
1106 bool "Enable infrastructure for get_user_pages()-related unit tests"
1110 to make ioctl calls that can launch kernel-based unit tests for
1115 the non-_fast variants.
1117 There is also a sub-test that allows running dump_page() on any
1119 range of user-space addresses. These pages are either pinned via
1162 # struct io_mapping based helper. Selected by drivers that need them
1176 not mapped to other processes and other kernel page tables.
1197 handle page faults in userland.
1217 file-backed memory types like shmem and hugetlbfs.
1219 # multi-gen LRU {
1221 bool "Multi-Gen LRU"
1223 # make sure folio->flags has enough spare bits
1227 Documentation/admin-guide/mm/multigen_lru.rst for details.
1233 This option enables the multi-gen LRU by default.
1242 This option has a per-memcg and per-node memory overhead.
1252 Allow per-vma locking during page fault handling.
1255 handling page faults instead of taking mmap_lock.