Lines Matching full:pages

86 admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
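A minimal sketch of inspecting and raising that reserve (the 131072 value below is an arbitrary illustration; on most machines 3% of free pages exceeds 8MB, so the default lands at 8192 kbytes)::

    # current reserve for privileged/recovery work, in kbytes
    cat /proc/sys/vm/admin_reserve_kbytes
    # raise it, e.g. to leave room for a root login under pressure
    echo 131072 > /proc/sys/vm/admin_reserve_kbytes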
115 huge pages although processes will also directly compact memory as required.
125 Note that compaction has a non-trivial system-wide impact as pages
138 allowed to examine the unevictable lru (mlocked pages) for pages to compact.
141 compaction from moving pages that are unevictable. Default value is 1.
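These two fragments describe compact_unevictable_allowed; a sketch of toggling it where mlock latency matters more than compaction coverage::

    # 1 (default): compaction may move unevictable (mlocked) pages
    cat /proc/sys/vm/compact_unevictable_allowed
    # 0: keep compaction away from the unevictable lru
    echo 0 > /proc/sys/vm/compact_unevictable_allowed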
163 Contains, as a percentage of total available memory that contains free pages
164 and reclaimable pages, the number of pages at which the background kernel
181 Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
198 Contains, as a percentage of total available memory that contains free pages
199 and reclaimable pages, the number of pages at which a process which is
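A sketch of the dirty_* family these fragments come from (writing a *_bytes knob zeroes its *_ratio counterpart and vice versa; the values are illustrations only)::

    # start background writeback at 5% of available memory
    echo 5 > /proc/sys/vm/dirty_background_ratio
    # throttle writers once 64MB is dirty (minimum is two pages, in bytes)
    echo $((64 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes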
208 When a lazytime inode is constantly having its pages dirtied, the inode with
358 pages for each zone from them. These are shown as an array of protection pages
360 Each zone has an array of protection pages like this::
363 pages free 1355
379 In this example, if normal pages (index=2) are required from this DMA zone and
405 256 means 1/256. # of protection pages becomes about "0.39%" of total managed
406 pages of higher zones on the node.
408 If you would like to protect more pages, smaller values are effective.
410 disables protection of the pages.
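To make the arithmetic concrete (the zone sizes here are hypothetical): with the default entry of 256, a lower zone under 1,000,000 managed pages of higher zones gets a protection of 1000000/256 ≈ 3906 pages, i.e. the quoted ~0.39%. The array is read and written whole::

    # one entry per zone, e.g. "256 256 32"
    cat /proc/sys/vm/lowmem_reserve_ratio
    # smaller value -> larger reserve for the corresponding lower zone
    echo "64 256 32" > /proc/sys/vm/lowmem_reserve_ratio
    # resulting per-zone protection arrays
    grep protection /proc/zoneinfo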
441 for a few types of pages, like kernel internally allocated data or
442 the swap cache, but works for the majority of user pages.
472 Each lowmem zone gets a number of reserved free pages based
487 A percentage of the total pages in each zone. On Zone reclaim
489 than this percentage of pages in a zone are reclaimable slab pages.
505 This is a percentage of the total pages in each zone. Zone reclaim will
506 only occur if more than this percentage of pages are in a state that
510 against all file-backed unmapped pages including swapcache pages and tmpfs
511 files. Otherwise, only unmapped pages backed by normal files but not tmpfs
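These percentages belong to min_slab_ratio and min_unmapped_ratio (both NUMA-only and relevant when zone reclaim is enabled); a sketch using what appear to be the defaults::

    # reclaim slab when more than 5% of a zone is reclaimable slab pages
    echo 5 > /proc/sys/vm/min_slab_ratio
    # require more than 1% of a zone to be eligible unmapped file pages
    echo 1 > /proc/sys/vm/min_unmapped_ratio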
522 accidentally operate based on the information in the first couple of pages
574 Once enabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
575 buddy allocator will be optimized (7 pages per 2MB HugeTLB page and 4095 pages
576 per 1GB HugeTLB page), whereas already allocated HugeTLB pages will not be
577 optimized. When those optimized HugeTLB pages are freed from the HugeTLB pool
578 to the buddy allocator, the vmemmap pages representing that range need to be
579 remapped again and the vmemmap pages discarded earlier need to be reallocated
580 again. If your use case is that HugeTLB pages are allocated 'on the fly' (e.g.
581 never explicitly allocating HugeTLB pages with 'nr_hugepages' but only setting
582 'nr_overcommit_hugepages', those overcommitted HugeTLB pages are allocated 'on
585 of allocation or freeing HugeTLB pages between the HugeTLB pool and the buddy
587 pressure, it could prevent the user from freeing HugeTLB pages from the HugeTLB
588 pool to the buddy allocator since the allocation of vmemmap pages could be
591 Once disabled, the vmemmap pages of subsequent allocation of HugeTLB pages from
593 time from buddy allocator disappears, whereas already optimized HugeTLB pages
595 pages, you can set "nr_hugepages" to 0 first and then disable this. Note that
596 writing 0 to nr_hugepages will make any "in use" HugeTLB pages become surplus
597 pages. So, those surplus pages are still optimized until they are no longer
598 in use. You would need to wait for those surplus pages to be released before
599 there are no optimized pages in the system.
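The drain-then-disable sequence described above, as a shell sketch (per the figures quoted, each optimized 2MB HugeTLB page frees 7 vmemmap pages, i.e. 28KB with a 4KB base page, roughly 1.4% of the huge page)::

    # turn in-use pool pages into surplus pages and let them drain
    echo 0 > /proc/sys/vm/nr_hugepages
    # once HugePages_Surp reaches 0, stop optimizing new allocations
    grep HugePages_Surp /proc/meminfo
    echo 0 > /proc/sys/vm/hugetlb_optimize_vmemmap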
629 trims excess pages aggressively. Any value >= 1 acts as the watermark where
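On NOMMU kernels this fragment appears to belong to nr_trim_pages; assuming so::

    # 0: never trim excess pages; 1: trim aggressively
    echo 1 > /proc/sys/vm/nr_trim_pages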
776 page-cluster controls the number of pages up to which consecutive pages
783 it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
786 The default value is three (eight pages at a time). There may be some
792 that consecutive pages readahead would have brought in.
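Because the value is a power-of-two exponent, tuning is a one-liner; a common sketch for low-latency swap devices (zram, NVMe), where readahead buys little::

    # 2^0 = 1 page per swap-in attempt; the default 3 means 2^3 = 8 pages
    echo 0 > /proc/sys/vm/page-cluster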
835 This is the fraction of pages in each zone that can be stored to
838 that we do not allow more than 1/8th of pages in each zone to be stored
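These fragments appear to describe percpu_pagelist_high_fraction; assuming so, the 1/8th floor looks like this::

    # 0 (default): let the kernel size per-cpu page lists itself
    cat /proc/sys/vm/percpu_pagelist_high_fraction
    # 8 is the minimum accepted value: at most 1/8th of a zone per-cpu
    echo 8 > /proc/sys/vm/percpu_pagelist_high_fraction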
895 cache and swap-backed pages equally; lower values signify more
911 file-backed pages is less than the high watermark in a zone.
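Both fragments match the swappiness description; assuming that, a sketch (100 weighs page cache and swap-backed pages equally, 0 defers swap until file-backed pages drop below the zone's high watermark)::

    cat /proc/sys/vm/swappiness        # 60 on most kernels
    echo 10 > /proc/sys/vm/swappiness  # prefer dropping page cache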
976 reclaimed if pages of different mobility are being mixed within pageblocks.
979 allocations, THP and hugetlbfs pages.
987 worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
1004 that the number of free pages kswapd maintains for latency reasons is
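A sketch of the two watermark knobs these lines come from; both are expressed in fractions of 10,000 (so the default boost factor of 15000 means 150% of the high watermark, and a scale factor of 10 keeps kswapd's gap at 0.1% of the zone)::

    # 0 disables boosting when mobility types mix within pageblocks
    echo 0 > /proc/sys/vm/watermark_boost_factor
    # widen the gap between kswapd waking and sleeping (max 3000)
    echo 100 > /proc/sys/vm/watermark_scale_factor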
1021 2 Zone reclaim writes dirty pages out
1022 4 Zone reclaim swaps pages
1034 allocating off node pages.
1036 Allowing zone reclaim to write out pages stops processes that are
1037 writing large amounts of data from dirtying pages on other nodes. Zone
1038 reclaim will write out dirty pages if a zone fills up and so effectively
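The mode is an OR of the bit values listed above; a sketch::

    # 1|2 = 3: reclaim locally and allow writing dirty pages out
    echo 3 > /proc/sys/vm/zone_reclaim_mode
    # 0 (the default) disables zone reclaim entirely
    echo 0 > /proc/sys/vm/zone_reclaim_mode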