
For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in /proc/sys/vm.

The files in this directory can be used to tune the operation
of the virtual memory (VM) subsystem of the Linux kernel.
Default values and initialization routines for most of these
files can be found in mm/swap.c.

Currently, these files are in /proc/sys/vm:
- admin_reserve_kbytes
- compact_memory
- compaction_proactiveness
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirtytime_expire_seconds
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- highmem_is_dirtyable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_hugepages_mempolicy
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- page_lock_unfairness
- panic_on_oom
- percpu_pagelist_high_fraction
- stat_interval
- stat_refresh
- numa_stat
- swappiness
- unprivileged_userfaultfd
- user_reserve_kbytes
- vfs_cache_pressure
- watermark_boost_factor
- watermark_scale_factor
- zone_reclaim_mode
The amount of free memory in the system that should be reserved for users

That should provide enough for the admin to log in and kill a process,

for the full Virtual Memory Size of programs used to recover. Otherwise,
root may not be able to log in to recover the system.

Changing this takes effect whenever an application requests memory.
all zones are compacted such that free memory is available in contiguous
blocks where possible. This can be important, for example, in the allocation
of huge pages, although processes will also directly compact memory as
required.
This tunable takes a value in the range [0, 100], with a default value of
20, and determines how aggressively compaction is done in the

Note that compaction has a non-trivial system-wide impact as pages

to latency spikes in unsuspecting applications. The kernel employs

acceptable trade for large contiguous free memory. Set to 0 to prevent

On CONFIG_PREEMPT_RT the default value is 0 in order to avoid a page fault, due
Contains the amount of dirty memory at which the background kernel

immediately taken into account to evaluate the dirty memory limits and the

Contains, as a percentage of total available memory that contains free pages

The total available memory is not equal to total system memory.

Contains the amount of dirty memory at which a process generating disk writes

account to evaluate the dirty memory limits and the other appears as 0 when

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any

for writeout by the kernel flusher threads. It is expressed in 100ths
of a second. Data which has been dirty in-memory for longer than this

Contains, as a percentage of total available memory that contains free pages

The total available memory is not equal to total system memory.
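The interaction between the ``*_bytes`` and ``*_ratio`` knobs can be sketched as follows. This is a minimal Python illustration, not kernel code; it assumes the documented behavior that the two knobs are mutually exclusive, with the bytes knob taking precedence when non-zero, and that the ratio applies to "available" memory (free plus reclaimable pages), not total system memory.

```python
# Minimal sketch (not kernel code) of how the effective background dirty
# threshold is derived from the two mutually exclusive knobs.

def background_threshold_bytes(available_bytes, background_ratio, background_bytes):
    """Return the dirty background threshold in bytes."""
    if background_bytes:          # dirty_background_bytes, when set, wins
        return background_bytes
    # otherwise apply dirty_background_ratio to *available* memory
    return available_bytes * background_ratio // 100

# 16 GiB available, a common dirty_background_ratio of 10, bytes knob unset:
print(background_threshold_bytes(16 * 2**30, 10, 0))   # 1717986918 (~1.6 GiB)
```

Writing to one of the two files zeroes the other, which is why only one term ever contributes here.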
out to disk. This tunable expresses the interval between those wakeups, in

memory becomes free.

This is a non-destructive operation and will not free any dirty objects.

reclaimed by the kernel when memory is needed elsewhere on the system.

You may see informational messages in your kernel log when this file is
This parameter affects whether the kernel will compact memory or direct
reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
debugfs shows what the fragmentation index for each order is in each zone in
the system.

of memory, values towards 1000 imply failures are due to fragmentation and -1

The kernel will not compact memory in a zone if the
fragmentation index is <= extfrag_threshold.
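The threshold check above can be shown as a one-line decision. This is a hedged sketch, assuming the documented index semantics (-1 means the allocation would succeed, values near 0 indicate lack of memory, values near 1000 indicate external fragmentation) and the default threshold of 500.

```python
# Sketch of the extfrag_threshold decision: compaction is only considered
# when the fragmentation index for the (zone, order) exceeds the threshold.

def should_compact(frag_index, extfrag_threshold=500):
    """True if failures look fragmentation-driven, so compaction may help."""
    return frag_index > extfrag_threshold

print(should_compact(-1))    # False: the allocation would succeed anyway
print(should_compact(900))   # True: failure is due to fragmentation
```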
This parameter controls whether the high memory is considered for dirty

only the amount of memory directly visible/usable by the kernel can
be dirtied. As a result, on systems with a large amount of memory and

Changing the value to non-zero would allow more memory to be dirtied

storage more effectively. Note this also comes with a risk of premature

only use the low memory and they can fill it up with dirty data without
shared memory segments using hugetlb pages.
controlled by this knob are discussed in
Documentation/admin-guide/laptops/laptop-mode.rst.

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
the kernel to allow process memory to be allocated from the "lowmem"
zone. This is because that memory could then be pinned via the mlock()

And on large highmem machines this lack of reclaimable lowmem memory

captured into pinned user memory.

in defending these lower zones.

in /proc/zoneinfo like the following. (This is an example from an x86-64 box.)

In this example, if normal pages (index=2) are required from this DMA zone and

zone[i]->protection[j]

The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
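The protection calculation described above can be sketched numerically. This is an illustration of the documented formula, not the kernel source: zone[i]->protection[j] is the managed pages of zones i+1 through j divided by lowmem_reserve_ratio[i], and zero for j <= i. The zone sizes below are made-up example values.

```python
# Sketch of zone[i]->protection[j]: pages reserved in zone i against
# allocations that could have been satisfied from the higher zones up to j.

def protection(managed_pages, ratios, i):
    """Return the protection[] array for zone index i (one entry per zone)."""
    prot = []
    for j in range(len(managed_pages)):
        if j <= i:
            prot.append(0)                 # a zone never protects against itself
        else:
            # managed pages of zones i+1 .. j, scaled down by the ratio
            prot.append(sum(managed_pages[i + 1 : j + 1]) // ratios[i])
    return prot

# Hypothetical DMA/DMA32/Normal sizes (pages) with ratios 256/256/32:
print(protection([3977, 1048576, 7340032], [256, 256, 32], 0))  # [0, 4096, 32768]
```

Raising the ratio for a zone shrinks these reserves, which is the lever this sysctl exposes.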
This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module) is detected in the background by hardware
and cannot be handled by the kernel. In some cases (like the page
Enable memory failure recovery (when supported by the platform)

0: Always panic on a memory failure.

watermark[WMARK_MIN] value for each lowmem zone in the system.

Some minimal amount of memory is needed to satisfy PF_MEMALLOC
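The boot-time default for min_free_kbytes can be approximated as follows. This is a hedged sketch of the kernel's heuristic (int_sqrt of 16 times the lowmem size in KiB, clamped); treat the exact clamp bounds as an assumption, since they have varied across kernel versions.

```python
# Approximate sketch of the min_free_kbytes boot heuristic.
import math

def default_min_free_kbytes(lowmem_kbytes):
    """int_sqrt(lowmem_kbytes * 16), clamped to an assumed [128, 262144] range."""
    val = math.isqrt(lowmem_kbytes * 16)
    return max(128, min(val, 262144))

# ~16 GiB of lowmem:
print(default_min_free_kbytes(16 * 1024 * 1024))   # 16384
```

The square-root scaling is why the reserve grows much more slowly than total memory.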
A percentage of the total pages in each zone. On Zone reclaim

than this percentage of pages in a zone are reclaimable slab pages.
This ensures that the slab growth stays under control even in NUMA

Note that slab reclaim is triggered in a per-zone / per-node fashion.
The process of reclaiming slab memory is currently not node-specific
This is a percentage of the total pages in each zone. Zone reclaim will
only occur if more than this percentage of pages are in a state that

against all file-backed unmapped pages including swapcache pages and tmpfs
accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By

vast majority of applications to work correctly and provide defense in depth

resulting from mmap allocations for applications run in
See Documentation/admin-guide/mm/hugetlbpage.rst
in include/linux/mm_types.h) is not a power of two (an unusual system config
could result in this).
benefits of memory savings against the increased overhead (~2x slower than
before)

allocator. Another behavior to note is that if the system is under heavy memory

writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus

in use. You would need to wait for those surplus pages to be released before
there are no optimized pages in the system.
Change the size of the hugepage pool at run-time on a specific

See Documentation/admin-guide/mm/hugetlbpage.rst

See Documentation/admin-guide/mm/hugetlbpage.rst

This value adjusts the excess page trimming behaviour of power-of-2 aligned

See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
'where the memory is allocated from' is controlled by zonelists.

In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows:
ZONE_NORMAL -> ZONE_DMA.
This means that a memory allocation request for GFP_KERNEL will
get memory from ZONE_DMA only when ZONE_NORMAL is not available.

In the NUMA case, you can think of the following two types of ordering:

(A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
(B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA

out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.

On 32-bit, the Normal zone needs to be preserved for allocations accessible

On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
Enables a system-wide task dump (excluding kernel threads) to be produced
when the kernel performs an OOM-killing and includes such information as

the memory state information for each one. Such systems should not
be forced to incur a performance penalty in OOM conditions when the

If this is set to non-zero, this information is shown whenever the
OOM killer actually kills a memory-hogging task.
This enables or disables killing the OOM-triggering task in
out-of-memory situations.

selects a rogue memory-hogging task that frees up a large amount of
memory when killed.

If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive

is used in oom_kill_allocating_task.
This value contains a flag that enables memory overcommitment.

of free memory left when userspace requests more memory.

memory until it actually runs out.

policy that attempts to prevent any overcommit of memory.

programs that malloc() huge amounts of memory "just-in-case"

See Documentation/mm/overcommit-accounting.rst and
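The strict accounting mode can be illustrated with the commit-limit arithmetic from overcommit-accounting.rst. This is a hedged sketch, not kernel code; it omits the hugetlb pages that the real calculation subtracts from RAM, and assumes overcommit_kbytes, when set, replaces the ratio-of-RAM term.

```python
# Sketch of the CommitLimit seen in /proc/meminfo under overcommit_memory=2.

def commit_limit_kbytes(ram_kbytes, swap_kbytes, overcommit_ratio, overcommit_kbytes=0):
    """Total address space the kernel will commit in strict mode, in KiB."""
    if overcommit_kbytes:            # the absolute knob takes precedence
        return overcommit_kbytes + swap_kbytes
    return ram_kbytes * overcommit_ratio // 100 + swap_kbytes

# 8 GiB RAM, 2 GiB swap, default overcommit_ratio of 50:
print(commit_limit_kbytes(8 * 1024 * 1024, 2 * 1024 * 1024, 50))   # 6291456
```

With the default ratio of 50, a swapless machine can therefore commit only half its RAM in this mode, which is why mode 2 often needs tuning.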
page-cluster

page-cluster controls the number of pages up to which consecutive pages
are read in from swap in a single attempt. This is the swap counterpart

Consecutive here is not in terms of virtual/physical addresses,
but consecutive in swap space - that means the pages were swapped out together.

It is a logarithmic value - setting it to zero means "1 page", setting

small benefits in tuning this to a different value if your workload is
swap-intensive.

that consecutive pages readahead would have brought in.
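The logarithmic encoding above maps directly to a power of two:

```python
# page-cluster is a log2 value: the swap readahead window is 2^page_cluster pages.

def swap_readahead_pages(page_cluster):
    return 1 << page_cluster

print(swap_readahead_pages(0))   # 1 page
print(swap_readahead_pages(3))   # 8 pages (the default setting)
```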
specified in this file (default is 5), the "fair lock handoff" semantics
This enables or disables the panic-on-out-of-memory feature.

If this is set to 1, the kernel panics when out-of-memory happens.

and those nodes reach memory exhaustion, one process
may be killed by the OOM killer. No panic occurs in this case,
because other nodes' memory may be free and the system as a whole

above-mentioned. Even if OOM happens under a memory cgroup, the whole
This is the fraction of pages in each zone that can be stored on
per-cpu page lists. It is an upper boundary that is divided depending

that we do not allow more than 1/8th of pages in each zone to be stored
on per-cpu page lists. This entry only changes the value of hot per-cpu

each zone between per-cpu lists.

The batch value of each per-cpu page list remains the same regardless of

The initial value is zero. The kernel uses this value to set the pcp->high
Any read or write (by root only) flushes all the per-cpu vm statistics

As a side-effect, it also checks for negative totals (elsewhere reported
as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
assumes equal IO cost and will thus apply memory pressure to the page
cache and swap-backed pages equally; lower values signify more

Keep in mind that filesystem IO patterns under memory pressure tend to

experimentation and will also be workload-dependent.

For in-memory swap, like zram or zswap, as well as hybrid setups that

file-backed pages is less than the high watermark in a zone.
This flag controls the mode in which unprivileged users can use the

to handle page faults in user mode only. In this case, users without
CAP_SYS_PTRACE must pass UFFD_USER_MODE_ONLY in order for userfaultfd to

Documentation/admin-guide/mm/userfaultfd.rst.
min(3% of current process size, user_reserve_kbytes) of free memory.
This is intended to prevent a user from starting a single memory hogging

all free memory with a single process, minus admin_reserve_kbytes.

Any subsequent attempts to execute a command will result in
"fork: Cannot allocate memory".

Changing this takes effect whenever an application requests memory.
the memory which is used for caching of directory and inode objects.

never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
This factor controls the level of reclaim when memory is being fragmented.

The intent is that compaction has less work to do in the future and to
increase the success rate of future high-order allocations such as SLUB

parameter, the unit is in fractions of 10,000. The default value of
15,000 means that up to 150% of the high watermark will be reclaimed in the

is determined by the number of fragmentation events that occurred in the

worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
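The fractions-of-10,000 arithmetic can be sketched numerically. This is an illustration of the documented scaling only, not kernel code: the high watermark is temporarily boosted by watermark_boost_factor / 10000 of itself when fragmentation events occur.

```python
# Sketch of the watermark_boost_factor scaling: the boost is
# factor/10000 of the zone's high watermark.

def boosted_high_watermark(high_wmark_pages, boost_factor=15000):
    boost = high_wmark_pages * boost_factor // 10000
    return high_wmark_pages + boost

# default factor of 15,000 => the boost is 150% of the high watermark:
print(boosted_high_watermark(1000))   # 2500
```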
amount of memory left in a node/system before kswapd is woken up and
how much memory needs to be free before kswapd goes back to sleep.

The unit is in fractions of 10,000. The default value of 10 means the
distances between watermarks are 0.1% of the available memory in the
node/system. The maximum value is 3000, or 30% of memory.

too small for the allocation bursts occurring in the system. This knob
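The watermark spacing can be sketched in the same fractions-of-10,000 terms. This is an illustration of the documented scaling, not kernel code; "available memory" here stands in for the zone's managed pages.

```python
# Sketch of watermark_scale_factor: the gap between consecutive watermarks
# is factor/10000 of the available memory, so 10 => 0.1% and 3000 => 30%.

def watermark_gap_pages(managed_pages, scale_factor=10):
    return managed_pages * scale_factor // 10000

print(watermark_gap_pages(4_000_000))          # 4000 pages at the default of 10
print(watermark_gap_pages(4_000_000, 3000))    # 1200000 pages at the maximum
```

Raising the factor widens the gap, waking kswapd earlier and letting it run longer before sleeping.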
reclaim memory when a zone runs out of memory. If it is set to zero then no

in the system.
and that accessing remote memory would cause a measurable performance

since it cannot use all of system memory to buffer the outgoing writes
anymore but it preserves the memory on other nodes so that the performance

node unless explicitly overridden by memory policies or cpuset