Searched full:reclaim (Results 1 – 25 of 432) sorted by relevance

/Linux-v6.1/tools/testing/selftests/cgroup/
memcg_protection.m
5 % This script simulates reclaim protection behavior on a single level of memcg
10 % reclaim) and then the reclaim starts, all memory is reclaimable, i.e. treated
11 % same. It simulates only non-low reclaim and assumes all memory.min = 0.
24 % Reclaim parameters
27 % Minimal reclaim amount (GB)
30 % Reclaim coefficient (think as 0.5^sc->priority)
72 % nothing to reclaim, reached equilibrium
79 % XXX here I do parallel reclaim of all siblings
80 % in reality reclaim is serialized and each sibling recalculates own residual
/Linux-v6.1/drivers/md/
dm-zoned-reclaim.c
12 #define DM_MSG_PREFIX "zoned reclaim"
33 * Reclaim state flags.
45 * Percentage of unmapped (free) random zones below which reclaim starts
51 * Percentage of unmapped (free) random zones above which reclaim will
338 * Reclaim an empty zone.
362 * Find a candidate zone for reclaim and process it.
376 DMDEBUG("(%s/%u): No zone found to reclaim", in dmz_do_reclaim()
390 * Reclaim the random data zone by moving its in dmz_do_reclaim()
412 * Reclaim the data zone by merging it into the in dmz_do_reclaim()
422 DMDEBUG("(%s/%u): reclaim zone %u interrupted", in dmz_do_reclaim()
[all …]
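
The excerpts above describe threshold-driven reclaim: a low/high pair of percentages of unmapped (free) random zones. A minimal sketch of that hysteresis check follows, with assumed constant values and a hypothetical helper name; the real thresholds and logic live in dm-zoned-reclaim.c:

    /* Assumed illustration values, not necessarily the driver's. */
    #define RECLAIM_LOW_UNMAP_RND_PCT   30  /* start reclaim below this */
    #define RECLAIM_HIGH_UNMAP_RND_PCT  50  /* idle reclaim above this */

    /* Hypothetical helper: should reclaim run now? */
    static bool zone_reclaim_needed(unsigned int nr_unmap_rnd,
                                    unsigned int nr_rnd)
    {
            unsigned int pct_unmap = nr_unmap_rnd * 100 / nr_rnd;

            return pct_unmap < RECLAIM_LOW_UNMAP_RND_PCT;
    }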
/Linux-v6.1/Documentation/core-api/
memory-allocation.rst
43 direct reclaim may be triggered under memory pressure; the calling
46 handler, use ``GFP_NOWAIT``. This flag prevents direct reclaim and
74 prevent recursion deadlocks caused by direct memory reclaim calling
87 GFP flags and reclaim behavior
89 Memory allocations may trigger direct or background reclaim and it is
95 doesn't kick the background reclaim. Should be used carefully because it
97 reclaim.
101 context but can wake kswapd to reclaim memory if the zone is below
111 * ``GFP_KERNEL`` - both background and direct reclaim are allowed and the
119 reclaim (one round of reclaim in this implementation). The OOM killer
[all …]
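
The excerpts spell out the basic contract of the common GFP masks: GFP_KERNEL may sleep and enter direct reclaim, while GFP_NOWAIT never blocks but can still wake kswapd. A minimal illustration of choosing between them; the wrapper function is hypothetical, kmalloc() and the flags are the standard API:

    #include <linux/slab.h>

    /* Hypothetical helper: pick the GFP mask to match the context. */
    static void *alloc_buf(size_t len, bool atomic_ctx)
    {
            if (atomic_ctx)
                    /* Must not sleep: no direct reclaim, may fail early. */
                    return kmalloc(len, GFP_NOWAIT);

            /* Process context: direct and background reclaim allowed. */
            return kmalloc(len, GFP_KERNEL);
    }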
gfp_mask-from-fs-io.rst
15 memory reclaim calling back into the FS or IO paths and blocking on
25 of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory
26 reclaim issues.
44 any critical section with respect to the reclaim is started - e.g.
45 lock shared with the reclaim context or when a transaction context
46 nesting would be possible via reclaim. The restore function should be
48 explanation what is the reclaim context for easier maintenance.
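
The document these lines come from recommends the scope API over hard-coded GFP_NOFS: mark the critical section, and every allocation inside it implicitly loses __GFP_FS. A short sketch of that pattern; the transaction helper is a placeholder, while memalloc_nofs_save() and memalloc_nofs_restore() are the real interface:

    #include <linux/sched/mm.h>

    static int fs_do_transaction(void)
    {
            unsigned int nofs_flags;
            int err;

            /* Entering a section shared with the reclaim context, e.g.
             * after taking a lock that reclaim can also take. */
            nofs_flags = memalloc_nofs_save();

            err = do_allocations_and_io();  /* hypothetical helper */

            memalloc_nofs_restore(nofs_flags);
            return err;
    }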
/Linux-v6.1/include/linux/
gfp_types.h
121 * %__GFP_ATOMIC indicates that the caller cannot reclaim or sleep and is
144 * DOC: Reclaim modifiers
146 * Reclaim modifiers
157 * %__GFP_DIRECT_RECLAIM indicates that the caller may enter direct reclaim.
162 * the low watermark is reached and have it reclaim pages until the high
164 * options are available and the reclaim is likely to disrupt the system. The
166 * reclaim/compaction may cause indirect stalls.
168 * %__GFP_RECLAIM is shorthand to allow/forbid both direct and kswapd reclaim.
180 * memory direct reclaim to get some memory under memory pressure (thus
186 * %__GFP_RETRY_MAYFAIL: The VM implementation will retry memory reclaim
[all …]
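
The modifier bits quoted above compose into the familiar high-level masks. The relationships below are simplified from the kernel's own definitions, with the __force gfp_t casts omitted for readability:

    /* Simplified from include/linux/gfp_types.h: */
    #define __GFP_RECLAIM  (__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM)

    #define GFP_NOWAIT     (__GFP_KSWAPD_RECLAIM)                 /* never blocks */
    #define GFP_KERNEL     (__GFP_RECLAIM | __GFP_IO | __GFP_FS)  /* may sleep */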
compaction.h
7 * Lower value means higher priority, analogically to reclaim priority.
25 * compaction didn't start as it was not possible or direct reclaim
129 /* Compaction needs reclaim to be performed first, so it can continue. */
134 * so the regular reclaim has to try harder and reclaim something. in compaction_needs_reclaim()
154 * instead of entering direct reclaim. in compaction_withdrawn()
shrinker.h
6 * This struct is used to pass information from page reclaim to the shrinkers.
19 * How many objects scan_objects should scan and try to reclaim.
55 * attempts to call the @scan_objects will be made from the current reclaim
66 long batch; /* reclaim batch size, 0 = default */
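
The struct quoted above pairs a count callback with a scan callback. A skeletal registration might look like the sketch below; the cache and its helpers are hypothetical, and in v6.1 register_shrinker() also takes a printf-style name:

    #include <linux/shrinker.h>

    static unsigned long my_cache_count(struct shrinker *shrink,
                                        struct shrink_control *sc)
    {
            return my_cache_nr_objects();   /* 0 = nothing to reclaim */
    }

    static unsigned long my_cache_scan(struct shrinker *shrink,
                                       struct shrink_control *sc)
    {
            /* Free up to sc->nr_to_scan objects, report how many went. */
            return my_cache_free(sc->nr_to_scan);
    }

    static struct shrinker my_shrinker = {
            .count_objects = my_cache_count,
            .scan_objects  = my_cache_scan,
            .batch         = 0,             /* 0 = default batch size */
            .seeks         = DEFAULT_SEEKS,
    };

    /* register_shrinker(&my_shrinker, "my-cache") at init;
     * unregister_shrinker(&my_shrinker) at teardown. */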
/Linux-v6.1/Documentation/admin-guide/device-mapper/
dm-zoned.rst
27 internally for storing metadata and performing reclaim operations.
108 situation, a reclaim process regularly scans used conventional zones and
109 tries to reclaim the least recently used zones by copying the valid
128 (for both incoming BIO processing and reclaim process) and all dirty
184 Normally the reclaim process will be started once there are less than 50
185 percent free random zones. In order to start the reclaim process manually
191 dmsetup message /dev/dm-X 0 reclaim
193 will start the reclaim process and random zones will be moved to sequential
/Linux-v6.1/mm/
vmscan.c
72 /* How many pages shrink_list() should reclaim */
83 * primary target of this reclaim invocation.
93 /* Can active folios be deactivated as part of reclaim? */
106 /* Can folios be swapped as part of reclaim? */
109 /* Proactive reclaim invoked by userspace through memory.reclaim */
149 /* The highest zone to isolate folios for reclaim from */
558 * For non-memcg reclaim, is there in can_reclaim_anon_pages()
579 * As the data only determines if reclaim or compaction continues, it is
824 * we will try to reclaim all available objects, otherwise we can end in do_shrink_slab()
832 * scanning at high prio and therefore should try to reclaim as much as in do_shrink_slab()
[all …]
swap.c
220 * safe side, underestimate, let page reclaim fix it, rather in lru_add_fn()
278 * immediate reclaim. If it still appears to be reclaimable, move it
305 * 1) The pinned lruvec in reclaim, or in lru_note_cost()
559 * inactive list to speed up its reclaim. It is moved to the
562 * effective than the single-page writeout from reclaim.
565 * could be reclaimed asap using the reclaim flag.
568 * 2. active, dirty/writeback folio -> inactive, head, reclaim
570 * 4. inactive, dirty/writeback folio -> inactive, head, reclaim
576 * than the single-page writeout from reclaim.
596 * Setting the reclaim flag could race with in lru_deactivate_file_fn()
[all …]
workingset.c
25 * the head of the inactive list and page reclaim scans pages from the
27 * are promoted to the active list, to protect them from reclaim,
34 * reclaim <- | inactive | <-+-- demotion | active | <--+
153 * actively used cache from reclaim. The cache is NOT transitioning to
322 * to the in-memory dimensions. This function allows reclaim and LRU
345 * @target_memcg: the cgroup that is causing the reclaim
444 * unconditionally with *every* reclaim invocation for the in workingset_refault()
453 * during folio reclaim is being determined. in workingset_refault()
535 * track shadow nodes and reclaim them when they grow way past the
597 * each, this will reclaim shadow entries when they consume in count_shadow_nodes()
[all …]
zbud.c
15 * reclaim properties that make it preferable to a higher density approach when
16 * reclaim will be used.
339 * reclaim, as indicated by the PG_reclaim flag being set, this function
358 /* zbud page is under reclaim, reclaim will free */ in zbud_free()
386 * zbud reclaim is different from normal system reclaim in that the reclaim is
405 * contains logic to delay freeing the page if the page is under reclaim,
/Linux-v6.1/drivers/gpu/drm/amd/amdgpu/
amdgpu_mes.h
379 * A bit more detail about why to set no-FS reclaim with MES lock:
396 * notifiers can be called in reclaim-FS context. That's where the
398 * memory pressure. While we are running in reclaim-FS context, we must
399 * not trigger another memory reclaim operation because that would
400 * recursively reenter the reclaim code and cause a deadlock. The
406 * Thread A: takes and holds reservation lock | triggers reclaim-FS |
411 * triggering a reclaim-FS operation itself.
419 * As a result, make sure no reclaim-FS happens while holding this lock anywhere
420 * to prevent deadlocks when an MMU notifier runs in reclaim-FS context.
/Linux-v6.1/Documentation/ABI/testing/
sysfs-kernel-mm-numa
9 Description: Enable/disable demoting pages during reclaim
11 Page migration during reclaim is intended for systems
16 Allowing page migration during reclaim enables these
/Linux-v6.1/Documentation/trace/postprocess/
trace-vmscan-postprocess.pl
3 # page reclaim. It makes an attempt to extract some high-level information on
325 # Record how long direct reclaim took this time
482 printf("Reclaim latencies expressed as order-latency_in_ms\n") if !$opt_ignorepid;
638 print "Direct reclaim pages scanned: $total_direct_nr_scanned\n";
639 print "Direct reclaim file pages scanned: $total_direct_nr_file_scanned\n";
640 print "Direct reclaim anon pages scanned: $total_direct_nr_anon_scanned\n";
641 print "Direct reclaim pages reclaimed: $total_direct_nr_reclaimed\n";
642 print "Direct reclaim file pages reclaimed: $total_direct_nr_file_reclaimed\n";
643 print "Direct reclaim anon pages reclaimed: $total_direct_nr_anon_reclaimed\n";
644 print "Direct reclaim write file sync I/O: $total_direct_writepage_file_sync\n";
[all …]
/Linux-v6.1/fs/xfs/
xfs_icache.c
185 * Queue background inode reclaim work if there are reclaimable inodes and there
186 * isn't reclaim work already scheduled or in progress.
273 * Reclaim can signal (with a null agino) that it cleared its own tag in xfs_perag_clear_inode_tag()
347 * the actual reclaim workers from stomping over us while we recycle in xfs_iget_recycle()
361 * trouble. Try to re-add it to the reclaim list. in xfs_iget_recycle()
811 * Grab the inode for reclaim exclusively.
818 * avoid inodes that are no longer reclaim candidates.
822 * ensured that we are able to reclaim this inode and the world can see that we
823 * are going to reclaim it.
837 /* not a reclaim candidate. */ in xfs_reclaim_igrab()
[all …]
/Linux-v6.1/Documentation/admin-guide/mm/
multigen_lru.rst
7 page reclaim and improves performance under memory pressure. Page
8 reclaim decides the kernel's caching policy and ability to overcommit
138 Proactive reclaim
140 Proactive reclaim induces page reclaim when there is no memory
142 comes in, the job scheduler wants to proactively reclaim cold pages on
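
The proactive reclaim the excerpt mentions is driven from userspace through the cgroup v2 memory.reclaim file: writing an amount asks the kernel to reclaim that much from the cgroup. A small userspace sketch; the cgroup path is an assumption about the local hierarchy:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            /* Assumed path; adjust for your cgroup layout. */
            const char *path = "/sys/fs/cgroup/mygroup/memory.reclaim";
            const char *amount = "64M";     /* reclaim 64 MiB, if possible */
            int fd = open(path, O_WRONLY);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* The write fails with EAGAIN if less than the requested
             * amount could be reclaimed. */
            if (write(fd, amount, strlen(amount)) < 0)
                    perror("write");
            close(fd);
            return 0;
    }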
concepts.rst
157 Reclaim chapter
182 repurposing them is called (surprise!) `reclaim`. Linux can reclaim
193 will trigger `direct reclaim`. In this case allocation is stalled
211 Like reclaim, the compaction may happen asynchronously in the ``kcompactd``
218 kernel will be unable to reclaim enough memory to continue to operate. In
/Linux-v6.1/Documentation/admin-guide/sysctl/
vm.rst
274 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
487 A percentage of the total pages in each zone. On Zone reclaim
491 systems that rarely perform global reclaim.
495 Note that slab reclaim is triggered in a per zone / node fashion.
505 This is a percentage of the total pages in each zone. Zone reclaim will
954 This percentage value controls the tendency of the kernel to reclaim
958 reclaim dentries and inodes at a "fair" rate with respect to pagecache and
959 swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
961 never reclaim dentries and inodes due to memory pressure and this can easily
963 causes the kernel to prefer to reclaim dentries and inodes.
[all …]
/Linux-v6.1/fs/lockd/
clntlock.c
40 unsigned short b_reclaim; /* got to reclaim lock */
210 * Reclaim all locks on server host. We do this by spawning a separate
220 task = kthread_run(reclaimer, host, "%s-reclaim", host->h_name); in nlmclnt_recovery()
258 /* First, reclaim all locks that have been granted. */ in reclaimer()
266 * the kernel will not attempt to reclaim them again if a new in reclaimer()
/Linux-v6.1/Documentation/admin-guide/cgroup-v1/
memory.rst
84 memory.force_empty trigger forced page reclaim
182 charged is over its limit. If it is, then reclaim is invoked on the cgroup.
183 More details can be found in the reclaim section of this document.
263 2.5 Reclaim
268 to reclaim memory from the cgroup so as to make space for the new
269 pages that the cgroup has touched. If the reclaim is unsuccessful,
273 The reclaim algorithm has not been modified for cgroups, except that
278 Reclaim does not work for the root cgroup, since we cannot set any
323 to trigger slab reclaim when those limits are reached.
371 In the current implementation, memory reclaim will NOT be
[all …]
/Linux-v6.1/Documentation/accounting/
delay-accounting.rst
15 d) memory reclaim
51 delay seen for cpu, sync block I/O, swapin, memory reclaim, thrash page
115 RECLAIM count delay total delay average
/Linux-v6.1/fs/btrfs/
space-info.c
103 * reclaim space so we can make new reservations.
110 * into a single operation done on demand. These are an easy way to reclaim
115 * for delayed allocation. We can reclaim some of this space simply by
117 * reclaim the bulk of this space.
122 * to reclaim space, but we want to hold this until the end because COW can
187 * scheduled for background reclaim.
595 * reclaim, but reclaiming that much data doesn't really track in shrink_delalloc()
596 * exactly. What we really want to do is reclaim full inode's in shrink_delalloc()
601 * will reclaim the metadata reservation for that range. If in shrink_delalloc()
863 /* If we're just plain full then async reclaim just slows us down. */ in need_preemptive_reclaim()
[all …]
/Linux-v6.1/include/linux/sched/
sd_flags.h
56 * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
64 * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
80 * SHARED_CHILD: Set from the base domain up to the NUMA reclaim level.
/Linux-v6.1/arch/x86/kernel/cpu/sgx/
main.c
285 * reclaim them to the enclave's private shmem files. Skip the pages, which have
515 * sgx_unmark_page_reclaimable() - Remove a page from the reclaim list
545 * @reclaim: reclaim pages if necessary
549 * @reclaim is set to true, directly reclaim pages when we are out of pages. No
550 * mm's can be locked when @reclaim is set to true.
559 struct sgx_epc_page *sgx_alloc_epc_page(void *owner, bool reclaim) in sgx_alloc_epc_page() argument
573 if (!reclaim) { in sgx_alloc_epc_page()
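
The signature quoted above makes reclaim an explicit caller decision. A hedged call-site sketch follows; the caller, the owner pointer, and the elided binding step are illustrative, not the driver's actual code:

    /* reclaim == true: the allocator may sleep and evict other enclave
     * pages; per the comment above, no mm locks may be held here. */
    static int map_enclave_page(struct sgx_encl_page *encl_page)
    {
            struct sgx_epc_page *epc_page;

            epc_page = sgx_alloc_epc_page(encl_page, true);
            if (IS_ERR(epc_page))
                    return PTR_ERR(epc_page);

            /* ... bind epc_page to encl_page (elided) ... */
            return 0;
    }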
