Lines Matching +full:high +full:- +full:performance

13 ------------------------------------------------------------------------------
27 - admin_reserve_kbytes
28 - compact_memory
29 - compaction_proactiveness
30 - compact_unevictable_allowed
31 - dirty_background_bytes
32 - dirty_background_ratio
33 - dirty_bytes
34 - dirty_expire_centisecs
35 - dirty_ratio
36 - dirtytime_expire_seconds
37 - dirty_writeback_centisecs
38 - drop_caches
39 - extfrag_threshold
40 - highmem_is_dirtyable
41 - hugetlb_shm_group
42 - laptop_mode
43 - legacy_va_layout
44 - lowmem_reserve_ratio
45 - max_map_count
46 - memory_failure_early_kill
47 - memory_failure_recovery
48 - min_free_kbytes
49 - min_slab_ratio
50 - min_unmapped_ratio
51 - mmap_min_addr
52 - mmap_rnd_bits
53 - mmap_rnd_compat_bits
54 - nr_hugepages
55 - nr_hugepages_mempolicy
56 - nr_overcommit_hugepages
57 - nr_trim_pages (only if CONFIG_MMU=n)
58 - numa_zonelist_order
59 - oom_dump_tasks
60 - oom_kill_allocating_task
61 - overcommit_kbytes
62 - overcommit_memory
63 - overcommit_ratio
64 - page-cluster
65 - panic_on_oom
66 - percpu_pagelist_high_fraction
67 - stat_interval
68 - stat_refresh
69 - numa_stat
70 - swappiness
71 - unprivileged_userfaultfd
72 - user_reserve_kbytes
73 - vfs_cache_pressure
74 - watermark_boost_factor
75 - watermark_scale_factor
76 - zone_reclaim_mode
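All of the tunables listed above are exposed as files under /proc/sys/vm and can be set persistently through sysctl(8). A minimal sketch of a drop-in configuration file (the path and the specific values are illustrative choices, not recommendations):

```
# /etc/sysctl.d/99-vm-tuning.conf -- illustrative values only
vm.swappiness = 10
vm.dirty_background_ratio = 5
vm.dirty_ratio = 20
vm.min_free_kbytes = 65536
```

Such a file is applied with `sysctl --system`; an individual knob can be read at runtime with, e.g., `sysctl vm.swappiness` or `cat /proc/sys/vm/swappiness`.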
124 Note that compaction has a non-trivial system-wide impact as pages
190 of a second. Data which has been dirty in-memory for longer than this
245 This is a non-destructive operation and will not free any dirty objects.
255 Use of this file can cause performance problems. Since it discards cached
273 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
276 of memory, values towards 1000 imply failures are due to fragmentation and -1
288 This parameter controls whether the high memory is considered for dirty
297 storage more effectively. Note this also comes with a risk of premature
314 controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.
320 If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
358 in /proc/zoneinfo like the following (this is an example on an x86-64 box).
365 high 4
388 zone[i]->protection[j]
408 The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
416 may have. Memory map areas are used as a side-effect of calling
476 become subtly broken, and prone to deadlock under high loads.
478 Setting this too high will OOM your machine instantly.
509 against all file-backed unmapped pages including swapcache pages and tmpfs
561 See Documentation/admin-guide/mm/hugetlbpage.rst
567 Change the size of the hugepage pool at run-time on a specific
570 See Documentation/admin-guide/mm/hugetlbpage.rst
579 See Documentation/admin-guide/mm/hugetlbpage.rst
587 This value adjusts the excess page trimming behaviour of power-of-2 aligned
596 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
610 In the non-NUMA case, the zonelist for GFP_KERNEL is ordered as follows.
611 ZONE_NORMAL -> ZONE_DMA
618 (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
619 (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.
623 out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.
638 On 32-bit, the Normal zone needs to be preserved for allocations accessible
641 On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
651 Enables a system-wide task dump (excluding kernel threads) to be produced
652 when the kernel performs an OOM-killing and includes such information as
661 be forced to incur a performance penalty in OOM conditions when the
664 If this is set to non-zero, this information is shown whenever the
665 OOM killer actually kills a memory-hogging task.
673 This enables or disables killing the OOM-triggering task in
674 out-of-memory situations.
678 selects a rogue memory-hogging task that frees up a large amount of
681 If this is set to non-zero, the OOM killer simply kills the task that
682 triggered the out-of-memory condition. This avoids the expensive
718 programs that malloc() huge amounts of memory "just-in-case"
723 See Documentation/vm/overcommit-accounting.rst and
735 page-cluster
738 page-cluster controls the number of pages up to which consecutive pages
742 but consecutive on swap space - that means they were swapped out together.
744 It is a logarithmic value - setting it to zero means "1 page", setting
750 swap-intensive.
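The logarithmic encoding described above can be made concrete with a small sketch (the helper name is ours for illustration, not a kernel API):

```python
def pages_per_swap_readahead(page_cluster: int) -> int:
    """Number of consecutive pages read from swap in one attempt.

    vm.page-cluster is logarithmic: 0 -> 1 page, 1 -> 2 pages,
    2 -> 4 pages, and so on.
    """
    return 2 ** page_cluster

# A page-cluster of 3 (a common default) reads 8 consecutive pages.
print(pages_per_swap_readahead(0))  # 1
print(pages_per_swap_readahead(3))  # 8
```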
760 This enables or disables the panic-on-out-of-memory feature.
766 If this is set to 1, the kernel panics when an out-of-memory condition occurs.
769 may be killed by oom-killer. No panic occurs in this case.
774 above-mentioned. Even if OOM happens under a memory cgroup, the whole
790 per-cpu page lists. It is an upper boundary that is divided depending
793 on per-cpu page lists. This entry only changes the value of hot per-cpu
795 each zone between per-cpu lists.
797 The batch value of each per-cpu page list remains the same regardless of
798 the value of the high fraction so allocation latencies are unaffected.
800 The initial value is zero. The kernel uses this value to set the high pcp->high
816 Any read or write (by root only) flushes all the per-cpu vm statistics
820 As a side-effect, it also checks for negative totals (elsewhere reported
831 When page allocation performance becomes a bottleneck and you can tolerate
837 When page allocation performance is not a bottleneck and you want all
849 cache and swap-backed pages equally; lower values signify more
854 experimentation and will also be workload-dependent.
858 For in-memory swap, like zram or zswap, as well as hybrid setups that
865 file-backed pages is less than the high watermark in a zone.
913 lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
917 performance impact. Reclaim code needs to take various locks to find freeable
926 It defines the percentage of the high watermark of a zone that will be
929 increase the success rate of future high-order allocations such as SLUB
934 15,000 means that up to 150% of the high watermark will be reclaimed in the
938 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
953 A high rate of threads entering direct reclaim (allocstall) or kswapd
983 and that accessing remote memory would cause a measurable performance
990 throttle the process. This may decrease the performance of a single process
992 anymore, but it preserves the memory on other nodes so that the performance