/Linux-v5.4/Documentation/core-api/ |
D | gfp_mask-from-fs-io.rst | 15 memory reclaim calling back into the FS or IO paths and blocking on 25 of GFP_NOFS/GFP_NOIO can lead to memory over-reclaim or other memory 26 reclaim issues. 44 any critical section with respect to the reclaim is started - e.g. 45 lock shared with the reclaim context or when a transaction context 46 nesting would be possible via reclaim. The restore function should be 48 explanation of what the reclaim context is, for easier maintenance.
|
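The scope API that gfp_mask-from-fs-io.rst describes can be sketched as follows. This is a hedged, illustrative kernel-context fragment (it needs kernel headers and cannot run in userspace); `fs_critical_section` is a hypothetical function name, while `memalloc_nofs_save()`/`memalloc_nofs_restore()` are the real interfaces from `linux/sched/mm.h`:

```c
#include <linux/sched/mm.h>

/* Hypothetical example: enter a section where reclaim must not
 * recurse into the filesystem (e.g. a transaction is open or a lock
 * shared with the reclaim context is held). */
static void fs_critical_section(void)
{
	unsigned int flags;

	/* Every allocation inside this scope implicitly behaves as if
	 * __GFP_FS were cleared, so direct reclaim cannot call back
	 * into the FS and deadlock. */
	flags = memalloc_nofs_save();

	/* ... allocate with plain GFP_KERNEL, take FS locks, run the
	 * transaction ... */

	/* Restore the previous scope state; per the excerpt above,
	 * pairing this with a comment explaining the reclaim context
	 * eases maintenance. */
	memalloc_nofs_restore(flags);
}
```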
D | memory-allocation.rst | 43 direct reclaim may be triggered under memory pressure; the calling 46 handler, use ``GFP_NOWAIT``. This flag prevents direct reclaim and 74 prevent recursion deadlocks caused by direct memory reclaim calling
|
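The gfp_mask rules excerpted from memory-allocation.rst amount to choosing a flag per calling context. A minimal kernel-context sketch (illustrative fragment only, assuming `linux/slab.h`; the buffer size and error handling are placeholders):

```c
#include <linux/slab.h>

/* Hypothetical helper: process context, sleeping allowed, so direct
 * reclaim may be triggered under memory pressure. */
static void *alloc_in_process_ctx(void)
{
	return kmalloc(4096, GFP_KERNEL);
}

/* Hypothetical helper: interrupt handler or under a spinlock, where
 * sleeping is forbidden. GFP_NOWAIT prevents direct reclaim; the
 * allocation simply fails fast if no memory is readily available. */
static void *alloc_in_atomic_ctx(void)
{
	return kmalloc(4096, GFP_NOWAIT);
}
```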
D | workqueue.rst | 143 on code paths that handle memory reclaim are required to be queued on 190 All wq which might be used in the memory reclaim paths **MUST** 330 items which are used during memory reclaim. Each wq with 333 reclaim, they should be queued to separate wq each with 344 which are not involved in memory reclaim and don't need to be
|
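The workqueue.rst rule quoted above — work items used during memory reclaim **MUST** go on their own `WQ_MEM_RECLAIM` workqueue — looks roughly like this in practice. A hedged kernel-context sketch; `writeback_wq` and the `"demo_writeback"` name are hypothetical, while `alloc_workqueue()` and `WQ_MEM_RECLAIM` are the real API:

```c
#include <linux/workqueue.h>

static struct workqueue_struct *writeback_wq;	/* hypothetical */

static int demo_init(void)
{
	/* WQ_MEM_RECLAIM guarantees a rescuer thread, so forward
	 * progress is possible even when worker creation itself would
	 * need memory. Each reclaim-path user gets its own wq, per the
	 * excerpt above. */
	writeback_wq = alloc_workqueue("demo_writeback",
				       WQ_MEM_RECLAIM, 0);
	if (!writeback_wq)
		return -ENOMEM;
	return 0;
}
```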
/Linux-v5.4/Documentation/admin-guide/sysctl/ |
D | vm.rst | 260 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in 473 A percentage of the total pages in each zone. On Zone reclaim 477 systems that rarely perform global reclaim. 481 Note that slab reclaim is triggered in a per zone / node fashion. 491 This is a percentage of the total pages in each zone. Zone reclaim will 872 This percentage value controls the tendency of the kernel to reclaim 876 reclaim dentries and inodes at a "fair" rate with respect to pagecache and 877 swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer 879 never reclaim dentries and inodes due to memory pressure and this can easily 881 causes the kernel to prefer to reclaim dentries and inodes. [all …]
|
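The vm.rst knobs excerpted above are plain sysctls. An illustrative shell session (values are examples, not recommendations; requires root):

```shell
# Inspect the current dcache/icache reclaim tendency (default 100 =
# "fair" rate relative to pagecache/swapcache reclaim).
sysctl vm.vfs_cache_pressure

# Lower it to make the kernel prefer keeping dentries and inodes.
# Per the excerpt, 0 means they are never reclaimed under pressure,
# which risks out-of-memory conditions -- avoid that extreme.
sysctl vm.vfs_cache_pressure=50
```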
/Linux-v5.4/drivers/md/ |
D | dm-zoned-target.c | 52 struct dmz_reclaim *reclaim; member 389 dmz_schedule_reclaim(dmz->reclaim); in dmz_handle_bio() 559 dmz_reclaim_bio_acc(dmz->reclaim); in dmz_queue_chunk_work() 814 ret = dmz_ctr_reclaim(dev, dmz->metadata, &dmz->reclaim); in dmz_ctr() 852 dmz_dtr_reclaim(dmz->reclaim); in dmz_dtr() 921 dmz_suspend_reclaim(dmz->reclaim); in dmz_suspend() 933 dmz_resume_reclaim(dmz->reclaim); in dmz_resume()
|
D | dm-zoned-reclaim.c | 508 struct dmz_reclaim **reclaim) in dmz_ctr_reclaim() argument 538 *reclaim = zrc; in dmz_ctr_reclaim()
|
/Linux-v5.4/Documentation/admin-guide/device-mapper/ |
D | dm-zoned.rst | 27 internally for storing metadata and performing reclaim operations. 104 situation, a reclaim process regularly scans used conventional zones and 105 tries to reclaim the least recently used zones by copying the valid 124 (for both incoming BIO processing and reclaim process) and all dirty
|
/Linux-v5.4/include/linux/ |
D | page-flags.h | 364 PAGEFLAG(Reclaim, reclaim, PF_NO_TAIL) in PAGEFLAG() 365 TESTCLEARFLAG(Reclaim, reclaim, PF_NO_TAIL) in PAGEFLAG() 366 PAGEFLAG(Readahead, reclaim, PF_NO_COMPOUND) in PAGEFLAG() 367 TESTCLEARFLAG(Readahead, reclaim, PF_NO_COMPOUND) in PAGEFLAG()
|
/Linux-v5.4/drivers/usb/host/ |
D | oxu210hp-hcd.c | 331 struct ehci_qh *reclaim; /* next to reclaim */ member 423 struct ehci_qh *reclaim; member 2085 struct ehci_qh *qh = oxu->reclaim; in end_unlink_async() 2095 next = qh->reclaim; in end_unlink_async() 2096 oxu->reclaim = next; in end_unlink_async() 2098 qh->reclaim = NULL; in end_unlink_async() 2117 oxu->reclaim = NULL; in end_unlink_async() 2132 BUG_ON(oxu->reclaim || (qh->qh_state != QH_STATE_LINKED in start_unlink_async() 2140 && !oxu->reclaim) { in start_unlink_async() 2151 oxu->reclaim = qh = qh_get(qh); in start_unlink_async() [all …]
|
/Linux-v5.4/include/linux/lockd/ |
D | xdr.h | 66 u32 reclaim; member
|
/Linux-v5.4/drivers/media/mmc/siano/ |
D | smssdio.c | 303 goto reclaim; in smssdio_probe() 307 reclaim: in smssdio_probe()
|
/Linux-v5.4/Documentation/accounting/ |
D | taskstats-struct.rst | 38 6) Extended delay accounting fields for memory reclaim 191 6) Extended delay accounting fields for memory reclaim:: 193 /* Delay waiting for memory reclaim */
|
D | delay-accounting.rst | 15 d) memory reclaim 48 delay seen for cpu, sync block I/O, swapin, memory reclaim etc.
|
/Linux-v5.4/Documentation/admin-guide/cgroup-v1/ |
D | memory.rst | 80 memory.force_empty trigger forced page reclaim 180 charged is over its limit. If it is, then reclaim is invoked on the cgroup. 181 More details can be found in the reclaim section of this document. 271 to reclaim memory from the cgroup so as to make space for the new 272 pages that the cgroup has touched. If the reclaim is unsuccessful, 276 The reclaim algorithm has not been modified for cgroups, except that 329 to trigger slab reclaim when those limits are reached. 377 In the current implementation, memory reclaim will NOT be 382 Since kmem charges will also be fed to the user counter and reclaim will be 616 Please note that unlike during the global reclaim, limit reclaim [all …]
|
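The cgroup-v1 memory.rst behavior excerpted above (charging past the limit invokes limit reclaim on that cgroup only; `memory.force_empty` triggers forced page reclaim) can be exercised from the shell. An illustrative session, assuming the legacy memory controller is mounted at /sys/fs/cgroup/memory and run as root; the `demo` group name is hypothetical:

```shell
mkdir /sys/fs/cgroup/memory/demo

# Charging beyond this limit triggers limit reclaim on this cgroup
# alone (unlike global reclaim, per the excerpt above).
echo $((64*1024*1024)) > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes

# Before removing an emptied cgroup, force reclaim of its remaining
# page charges.
echo 0 > /sys/fs/cgroup/memory/demo/memory.force_empty
rmdir /sys/fs/cgroup/memory/demo
```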
D | hugetlb.rst | 7 support page reclaim, enforcing the limit at page fault time implies that,
|
/Linux-v5.4/fs/lockd/ |
D | clntproc.c | 278 if (host->h_reclaiming && !argp->reclaim) in nlmclnt_call() 308 if (argp->reclaim) { in nlmclnt_call() 314 if (!argp->reclaim) { in nlmclnt_call() 632 req->a_args.reclaim = 1; in nlmclnt_reclaim()
|
D | svc4proc.c | 146 argp->reclaim); in __nlm4svc_proc_lock() 365 if (locks_in_grace(SVC_NET(rqstp)) && !argp->reclaim) { in nlm4svc_proc_share()
|
/Linux-v5.4/Documentation/admin-guide/mm/ |
D | concepts.rst | 182 repurposing them is called (surprise!) `reclaim`. Linux can reclaim 193 will trigger `direct reclaim`. In this case allocation is stalled 211 Like reclaim, the compaction may happen asynchronously in the ``kcompactd`` 218 kernel will be unable to reclaim enough memory to continue to operate. In
|
D | transhuge.rst | 125 allocation failure and directly reclaim pages and compact 132 to reclaim pages and wake kcompactd to compact memory so that 137 will enter direct reclaim and compaction like ``always``, but 139 other regions will wake kswapd in the background to reclaim 144 will enter direct reclaim like ``always`` but only for regions
|
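The transhuge.rst modes quoted above (`always` stalls in direct reclaim/compaction, `defer` wakes kswapd/kcompactd in the background, `madvise`/`defer+madvise` restrict that to hinted regions) are selected through sysfs. An illustrative shell fragment (root required; the available modes in v5.4 are always, defer, defer+madvise, madvise, never):

```shell
# The bracketed entry is the active defrag mode.
cat /sys/kernel/mm/transparent_hugepage/defrag

# Switch to "defer": on THP allocation failure, wake kswapd to reclaim
# and kcompactd to compact in the background instead of stalling the
# faulting task.
echo defer > /sys/kernel/mm/transparent_hugepage/defrag
```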
/Linux-v5.4/Documentation/vm/ |
D | z3fold.rst | 29 depend on MMU enabled and provides more predictable reclaim behavior
|
D | unevictable-lru.rst | 32 reclaim in Linux. The problems have been observed at customer sites on large 110 not attempt to reclaim pages on the unevictable list. This has a couple of 113 (1) Because the pages are "hidden" from reclaim on the unevictable list, the 114 reclaim process can be more efficient, dealing only with pages that have a 271 reclaim a page in a VM_LOCKED VMA via try_to_unmap() 333 it later if and when it attempts to reclaim the page. 408 This is fine, because we'll catch it later if and when vmscan tries to reclaim 536 try_to_unmap() is always called, by either vmscan for reclaim or for page 543 When trying to reclaim, if try_to_unmap_one() finds the page in a VM_LOCKED 550 munlock or munmap system calls, mm teardown (munlock_vma_pages_all), reclaim,
|
/Linux-v5.4/net/ipv4/ |
D | tcp_metrics.c | 153 bool reclaim = false; in tcpm_new() local 163 reclaim = true; in tcpm_new() 171 if (unlikely(reclaim)) { in tcpm_new() 192 if (likely(!reclaim)) { in tcpm_new()
|
/Linux-v5.4/Documentation/ABI/testing/ |
D | pstore | 28 device that it can reclaim the space for later re-use.
|
/Linux-v5.4/drivers/net/wireless/intel/iwlwifi/pcie/ |
D | rx.c | 1257 bool reclaim; in iwl_pcie_rx_handle_rb() local 1304 reclaim = !(pkt->hdr.sequence & SEQ_RX_FRAME); in iwl_pcie_rx_handle_rb() 1305 if (reclaim && !pkt->hdr.group_id) { in iwl_pcie_rx_handle_rb() 1311 reclaim = false; in iwl_pcie_rx_handle_rb() 1328 if (reclaim) { in iwl_pcie_rx_handle_rb() 1338 if (reclaim) { in iwl_pcie_rx_handle_rb()
|
/Linux-v5.4/Documentation/filesystems/ |
D | fuse-io.txt | 31 reclaim on memory pressure) or explicitly (invoked by close(2), fsync(2) and
|