Lines Matching +full:locality +full:- +full:specific
13 ------------------------------------------------------------------------------
27 - admin_reserve_kbytes
28 - block_dump
29 - compact_memory
30 - compaction_proactiveness
31 - compact_unevictable_allowed
32 - dirty_background_bytes
33 - dirty_background_ratio
34 - dirty_bytes
35 - dirty_expire_centisecs
36 - dirty_ratio
37 - dirtytime_expire_seconds
38 - dirty_writeback_centisecs
39 - drop_caches
40 - extfrag_threshold
41 - highmem_is_dirtyable
42 - hugetlb_shm_group
43 - laptop_mode
44 - legacy_va_layout
45 - lowmem_reserve_ratio
46 - max_map_count
47 - memory_failure_early_kill
48 - memory_failure_recovery
49 - min_free_kbytes
50 - min_slab_ratio
51 - min_unmapped_ratio
52 - mmap_min_addr
53 - mmap_rnd_bits
54 - mmap_rnd_compat_bits
55 - nr_hugepages
56 - nr_hugepages_mempolicy
57 - nr_overcommit_hugepages
58 - nr_trim_pages (only if CONFIG_MMU=n)
59 - numa_zonelist_order
60 - oom_dump_tasks
61 - oom_kill_allocating_task
62 - overcommit_kbytes
63 - overcommit_memory
64 - overcommit_ratio
65 - page-cluster
66 - panic_on_oom
67 - percpu_pagelist_fraction
68 - stat_interval
69 - stat_refresh
70 - numa_stat
71 - swappiness
72 - unprivileged_userfaultfd
73 - user_reserve_kbytes
74 - vfs_cache_pressure
75 - watermark_boost_factor
76 - watermark_scale_factor
77 - zone_reclaim_mode
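Every parameter in the list above is exposed as a file under /proc/sys/vm and can be inspected or tuned at runtime with sysctl(8). A minimal sketch, using vm.swappiness as the example (the value 10 below is illustrative only; writes require root and are shown commented out):

```shell
# Read the current swappiness value directly (world-readable).
cat /proc/sys/vm/swappiness

# Equivalent read via sysctl(8); note the dotted vm. namespace.
sysctl vm.swappiness

# Writing requires root; this would set swappiness for the running
# system only (not persistent across reboots):
#   sysctl -w vm.swappiness=10

# To persist, add a line to /etc/sysctl.conf or a file under
# /etc/sysctl.d/, e.g.:
#   vm.swappiness = 10
```

The same pattern applies to any of the tunables listed above; read-only or boot-time-only knobs (e.g. compact_memory, which is write-only) are the exceptions noted in their individual sections.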
113 information on block I/O debugging is in Documentation/admin-guide/laptops/laptop-mode.rst.
131 Note that compaction has a non-trivial system-wide impact as pages
197 of a second. Data which has been dirty in memory for longer than this
252 This is a non-destructive operation and will not free any dirty objects.
280 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
283 of memory, values towards 1000 imply failures are due to fragmentation and -1
304 storage more effectively. Note this also comes with a risk of premature
321 controlled by this knob are discussed in Documentation/admin-guide/laptops/laptop-mode.rst.
327 If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
365 in /proc/zoneinfo as follows. (This is an example from an x86-64 box.)
395 zone[i]->protection[j]
415 The minimum value is 1 (1/1 -> 100%). A value less than 1 completely
423 may have. Memory map areas are used as a side-effect of calling
502 The process of reclaiming slab memory is currently not node specific
516 against all file-backed unmapped pages including swapcache pages and tmpfs
568 See Documentation/admin-guide/mm/hugetlbpage.rst
574 Change the size of the hugepage pool at run-time on a specific
577 See Documentation/admin-guide/mm/hugetlbpage.rst
586 See Documentation/admin-guide/mm/hugetlbpage.rst
594 This value adjusts the excess page trimming behaviour of power-of-2 aligned
603 See Documentation/admin-guide/mm/nommu-mmap.rst for more information.
617 In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows.
618 ZONE_NORMAL -> ZONE_DMA
625 (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
626 (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.
628 Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
630 out-of-memory (OOM) in ZONE_DMA because ZONE_DMA tends to be small.
632 Type(B) cannot offer the best locality but is more robust against OOM of
645 On 32-bit, the Normal zone needs to be preserved for allocations accessible
648 On 64-bit, devices that require DMA32/DMA are relatively rare, so "node"
658 Enables a system-wide task dump (excluding kernel threads) to be produced
659 when the kernel performs an OOM kill and includes such information as
671 If this is set to non-zero, this information is shown whenever the
672 OOM killer actually kills a memory-hogging task.
680 This enables or disables killing the OOM-triggering task in
681 out-of-memory situations.
685 selects a rogue memory-hogging task that frees up a large amount of
688 If this is set to non-zero, the OOM killer simply kills the task that
689 triggered the out-of-memory condition. This avoids the expensive
725 programs that malloc() huge amounts of memory "just-in-case"
730 See Documentation/vm/overcommit-accounting.rst and
742 page-cluster
745 page-cluster controls the number of pages up to which consecutive pages
749 but consecutive on swap space - that means they were swapped out together.
751 It is a logarithmic value - setting it to zero means "1 page", setting
757 swap-intensive.
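Because page-cluster is a log2 value, the number of pages read in from swap at once is 2 raised to the setting. A small sketch of the mapping (the value 3 shown as default matches the historical kernel default):

```shell
# page-cluster is log2 of the swap readahead window:
# value 0 -> 1 page, 1 -> 2 pages, 2 -> 4 pages, 3 -> 8 pages (default).
for v in 0 1 2 3; do
    echo "page-cluster=$v -> $((1 << v)) pages per swap read"
done
```

Setting it to 0 therefore disables readahead of consecutive swap pages entirely, which is usually the right choice for the in-memory swap backends (zram, zswap) discussed below.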
767 This enables or disables the panic-on-out-of-memory feature.
773 If this is set to 1, the kernel panics when an out-of-memory condition occurs.
776 may be killed by the OOM killer. No panic occurs in this case.
781 above-mentioned. Even if OOM happens under a memory cgroup, the whole
796 This is the fraction of pages at most (high mark pcp->high) in each zone that
804 set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8)
821 Any read or write (by root only) flushes all the per-cpu vm statistics
825 As a side-effect, it also checks for negative totals (elsewhere reported
854 cache and swap-backed pages equally; lower values signify more
859 experimentation and will also be workload-dependent.
863 For in-memory swap, like zram or zswap, as well as hybrid setups that
870 file-backed pages is less than the high watermark in a zone.
913 lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
929 increase the success rate of future high-order allocations such as SLUB
939 (e.g. 2MB on 64-bit x86). A boost factor of 0 will disable the feature.
979 data locality.