Lines Matching full:migration

3 * Memory Migration functionality - linux/mm/migrate.c
7 * Page migration was first developed in the context of the memory hotplug
8 * project. The main authors of the migration code are:
86 * compaction threads can race against page migration functions in isolate_movable_page()
90 * being (wrongly) re-isolated while it is under migration, in isolate_movable_page()
133 * from where they were once taken off for compaction/migration.
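The fragments above describe how isolate_movable_page() keeps compaction from (wrongly) re-isolating a page that is already under migration: take a reference, trylock the page, then re-check the flags. A condensed sketch of that guard, in the shape of the ~5.x helper (the driver's ->isolate_page() callback is elided; treat details as illustrative):

    /* Sketch of the isolation guard in isolate_movable_page() */
    static bool try_isolate_movable(struct page *page)
    {
        if (unlikely(!get_page_unless_zero(page)))
            return false;          /* page is being freed */
        if (unlikely(!__PageMovable(page)))
            goto out_put;          /* not a driver-movable page */
        if (unlikely(!trylock_page(page)))
            goto out_put;          /* a racing isolate/migrate holds the lock */
        if (!PageMovable(page) || PageIsolated(page))
            goto out_unlock;       /* lost the race: already isolated */

        __SetPageIsolated(page);
        unlock_page(page);
        return true;

    out_unlock:
        unlock_page(page);
    out_put:
        put_page(page);
        return false;
    }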
173 * Restore a potential migration pte to a working pte entry
197 /* PMD-mapped THP migration entry */ in remove_migration_pte()
211 * Recheck VMA as permissions can change since migration started in remove_migration_pte()
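Restoring a migration entry means rebuilding a present pte from the swap-style entry, re-applying writability and soft-dirty state against the current VMA. A condensed sketch of the core of remove_migration_pte(), where pvmw is the page_vma_mapped_walk state and new is the page being mapped back; the is_write_migration_entry() name is the pre-5.13 helper (later kernels split the migration-entry constructors), so treat the exact names as a vintage assumption:

    /* Rebuild a working pte from the migration entry (sketch) */
    swp_entry_t entry = pte_to_swp_entry(*pvmw.pte);
    pte_t pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));

    if (pte_swp_soft_dirty(*pvmw.pte))
        pte = pte_mksoft_dirty(pte);
    if (is_write_migration_entry(entry))
        pte = maybe_mkwrite(pte, vma);  /* permissions re-checked vs. the VMA */

    flush_dcache_page(new);
    set_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte);
    update_mmu_cache(vma, pvmw.address, pvmw.pte);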
268 * Get rid of all migration entries and replace them by
285 * Something used the pte of a page under migration. We need to
286 * get to the page and wait until migration is finished.
309 * Once page cache replacement of page migration started, page_count in __migration_entry_wait()
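When a fault hits a pte holding a migration entry, the faulting thread simply blocks until the migrating task unlocks the page, then retries. A minimal sketch of that fault-side check, in the shape of do_swap_page() with the classic migration_entry_wait() helper (pre-5.19 signature; treat as illustrative):

    /* Sketch: fault path encounters a pte under migration */
    swp_entry_t entry = pte_to_swp_entry(vmf->orig_pte);

    if (is_migration_entry(entry)) {
        /* Sleeps until the migrating task unlocks the page */
        migration_entry_wait(vma->vm_mm, vmf->pmd, vmf->address);
        return 0;   /* the fault is simply retried afterwards */
    }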
628 * Migration functions
681 * async migration. Release the taken locks in buffer_migrate_lock_buffers()
777 * Migration function for pages with buffers. This function can only be used
826 * migration. Writeout may mean we lose the lock and the in writeout()
828 * At this point we know that the migration attempt cannot in writeout()
843 * Default handling if a filesystem does not provide a migration function.
849 /* Only writeback pages in full synchronous migration */ in fallback_migrate_page()
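The fallback fragments encode the constraint: a dirty page may only be written out under full synchronous migration, and a page whose private buffers cannot be released cannot be moved. A condensed sketch of fallback_migrate_page() (simplified from the real helper in migrate.c; the MIGRATE_SYNC_NO_COPY case and per-mode error codes are folded away):

    /* Sketch: default migration when a filesystem supplies no callback */
    static int fallback_migrate_page(struct address_space *mapping,
            struct page *newpage, struct page *page,
            enum migrate_mode mode)
    {
        if (PageDirty(page)) {
            /* Only write back in full synchronous migration */
            if (mode != MIGRATE_SYNC)
                return -EBUSY;
            return writeout(mapping, page);  /* may drop the page lock */
        }

        /* Private buffers must be droppable, or the page can't move */
        if (page_has_private(page) && !try_to_release_page(page, GFP_KERNEL))
            return -EAGAIN;

        return migrate_page(mapping, newpage, page, mode);
    }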
903 * for page migration. in move_to_new_page()
913 * isolation step. In that case, we shouldn't try migration. in move_to_new_page()
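move_to_new_page() is the dispatch point those fragments refer to: anonymous pages take the generic path, filesystems with state provide their own callback, and everything else falls back to the default above. A sketch of that dispatch, using the pre-6.0 a_ops->migratepage naming (an assumption about the listing's vintage):

    /* Sketch of the dispatch inside move_to_new_page() */
    struct address_space *mapping = page_mapping(page);
    int rc;

    if (!mapping)
        rc = migrate_page(mapping, newpage, page, mode);   /* anon */
    else if (mapping->a_ops->migratepage)
        /* Filesystem-provided migration for pages with private state */
        rc = mapping->a_ops->migratepage(mapping, newpage, page, mode);
    else
        rc = fallback_migrate_page(mapping, newpage, page, mode);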
992 * Only in the case of a full synchronous migration is it in __unmap_and_move()
1014 * of migration. File cache pages are no problem because of lock_page() in __unmap_and_move()
1015 * File cache pages may use writepage() or lock_page() in migration, then, in __unmap_and_move()
1062 /* Establish migration ptes */ in __unmap_and_move()
1085 * If migration is successful, decrease refcount of the newpage in __unmap_and_move()
1119 * Node 2. The migration path starts on the nodes with the
1226 * If migration is successful, releases reference grabbed during in unmap_and_move()
1259 * Counterpart of unmap_and_move() for hugepage migration.
1262 * because there is no race between I/O and migration for hugepage.
1270 * hugepage migration fails without data corruption.
1272 * There is also no race when direct I/O is issued on the page under migration,
1273 * because then pte is replaced with migration swap entry and direct I/O code
1274 * will wait in the page fault for migration to complete.
1290 * This check is necessary because some callers of hugepage migration in unmap_and_move_huge_page()
1293 * kicking migration. in unmap_and_move_huge_page()
1393 * If migration was not successful and there's a freeing callback, use in unmap_and_move_huge_page()
1421 * supplied as the target for the page migration
1425 * as the target of the page migration.
1426 * @put_new_page: The function used to free target pages if migration
1429 * @mode: The migration mode that specifies the constraints for
1430 * page migration, if any.
1431 * @reason: The reason for page migration.
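Those kernel-doc fragments belong to the main migrate_pages() entry point. A hedged usage sketch in the style of the memory-hotplug caller, pairing it with alloc_migration_target() and a migration_target_control; the arity shown is the ~5.15 one (the trailing ret_succeeded out-parameter was added around then), so treat the exact signature as a version assumption:

    #include <linux/migrate.h>

    /* Sketch: pages were previously isolated onto 'pagelist';
     * migrate them all to node 'nid'. */
    struct migration_target_control mtc = {
        .nid      = nid,
        .gfp_mask = GFP_USER | __GFP_MOVABLE | __GFP_RETRY_MAYFAIL,
    };
    int rc = migrate_pages(&pagelist, alloc_migration_target,
                           NULL /* put_new_page */,
                           (unsigned long)&mtc,
                           MIGRATE_SYNC, MR_MEMORY_HOTPLUG,
                           NULL /* ret_succeeded */);
    if (rc)  /* permanently failing pages remain on the list */
        putback_movable_pages(&pagelist);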
1476 * during migration. in migrate_pages()
1502 * THP migration might be unsupported or the in migrate_pages()
1513 /* THP migration is unsupported */ in migrate_pages()
1525 /* Hugetlb migration is unsupported */ in migrate_pages()
1565 * removed from migration page list and not in migrate_pages()
1583 * Put pages that failed permanently back on the migration list; they in migrate_pages()
1629 * clear __GFP_RECLAIM to make the migration callback in alloc_migration_target()
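The __GFP_RECLAIM fragment refers to THP targets: alloc_migration_target() masks out reclaim so a failed huge-page target allocation falls back quickly instead of stalling the migration to compact/reclaim for one page. A sketch of that gfp adjustment, condensed from the shape of the helper (treat details as illustrative):

    /* Sketch: THP target allocation inside alloc_migration_target() */
    gfp_t gfp_mask = mtc->gfp_mask;
    int order = 0;

    if (PageTransHuge(page)) {
        /*
         * Clear __GFP_RECLAIM so the migration callback behaves like
         * a regular THP allocation: fail fast rather than reclaim.
         */
        gfp_mask &= ~__GFP_RECLAIM;
        gfp_mask |= GFP_TRANSHUGE;
        order = HPAGE_PMD_ORDER;
    }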
1836 /* The page is successfully queued for migration */ in do_pages_move()
2052 * Returns true if this is a safe migration target node for misplaced NUMA
2134 * disappearing underneath us during migration. in numamigrate_isolate_page()
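For NUMA-balancing migration, isolating the page is what pins it: isolate_lru_page() takes a reference, and that reference is what keeps the page from disappearing mid-migration. A minimal sketch of that step in numamigrate_isolate_page() (shape only; surrounding balance checks elided):

    /* Sketch: isolate for NUMA migration; isolation holds a reference */
    if (isolate_lru_page(page))
        return 0;   /* someone else already isolated it */

    /*
     * The reference taken by isolation keeps the page from being
     * freed or disappearing underneath us during migration.
     */
    list_add(&page->lru, &migratepages);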
2364 * any kind of migration. Side effect is that it "freezes" the in migrate_vma_collect_pmd()
2377 * set up a special migration page table entry now. in migrate_vma_collect_pmd()
2385 /* Setup special migration page table entry */ in migrate_vma_collect_pmd()
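"Freezing" a pte in migrate_vma_collect_pmd() means replacing it, under the page-table lock, with a special migration swap entry so any CPU access faults and waits (as in the sketch after line 309 above). A sketch of that replacement using the pre-5.13 make_migration_entry() helper (later kernels split it into make_{readable,writable}_migration_entry(); the exact helper is a vintage assumption):

    /* Sketch: swap a present pte for a migration entry (ptl held) */
    swp_entry_t entry;
    pte_t swp_pte;

    ptep_get_and_clear(mm, addr, ptep);          /* freeze the mapping */
    entry = make_migration_entry(page, pte_write(pte));
    swp_pte = swp_entry_to_pte(entry);
    if (pte_soft_dirty(pte))
        swp_pte = pte_swp_mksoft_dirty(swp_pte);
    set_pte_at(mm, addr, ptep, swp_pte);         /* CPU faults now wait on us */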
2439 * @migrate: migrate struct containing all migration information
2471 * migrate_page_move_mapping(), except that here we allow migration of a
2495 * GUP will fail for those. Yet if there is a pending migration in migrate_vma_check_page()
2496 * a thread might try to wait on the pte migration entry and in migrate_vma_check_page()
2498 * differentiate a regular pin from migration wait. Hence to in migrate_vma_check_page()
2500 * infinite loop (one stopping migration because the other is in migrate_vma_check_page()
2501 * waiting on pte migration entry). We always return true here. in migrate_vma_check_page()
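The pin check those fragments describe boils down to refcount arithmetic: beyond the reference isolation took, only mappings (plus the page cache and private data for file pages) may hold references; anything extra is a GUP pin and blocks migration. Device-private pages cannot distinguish a pin from a migration waiter, so they always pass. A condensed sketch of migrate_vma_check_page() (helper names from this era; treat as illustrative):

    /* Sketch of migrate_vma_check_page(): is the page pinned? */
    static bool migrate_vma_check_page(struct page *page)
    {
        int extra = 1;            /* the reference isolation took */

        if (is_zone_device_page(page))
            /* Can't tell a pin from a migration waiter: assume OK */
            return is_device_private_page(page);

        if (page_mapping(page))   /* file-backed: cache + private data */
            extra += 1 + page_has_private(page);

        /* Any reference beyond the mappings and 'extra' is a pin */
        return (page_count(page) - extra) <= page_mapcount(page);
    }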
2521 * @migrate: migrate struct containing all migration information
2547 * a deadlock between two concurrent migrations where each in migrate_vma_prepare()
2628 * migrate_vma_unmap() - replace page mapping with special migration pte entry
2629 * @migrate: migrate struct containing all migration information
2631 * Replace page mapping (CPU page table pte) with a special migration pte entry
2686 * @args: contains the vma, start, and pfns arrays for the migration
2707 * with MIGRATE_PFN_MIGRATE flag in src array unless this is a migration from
2717 * properly set the destination entry like for regular migration. Note that
2719 * migration was successful for those entries after calling migrate_vma_pages(),
2720 * just like for regular migration.
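The migrate_vma_* fragments describe the three-phase driver API: setup collects and unmaps the source pages, the driver fills the dst array for entries flagged MIGRATE_PFN_MIGRATE, then pages/finalize install and commit the new mappings. A hedged usage sketch of the whole flow; NPAGES and new_device_page() are hypothetical for this sketch, and flags/field details vary by version:

    #include <linux/migrate.h>

    #define NPAGES 16  /* illustrative batch size */

    /* Sketch: migrate one VMA range toward device memory.
     * new_device_page() is a hypothetical allocator for this sketch. */
    static int demo_migrate_range(struct vm_area_struct *vma, unsigned long start)
    {
        unsigned long src[NPAGES] = {}, dst[NPAGES] = {};
        struct migrate_vma args = {
            .vma   = vma,
            .start = start,
            .end   = start + NPAGES * PAGE_SIZE,
            .src   = src,
            .dst   = dst,
            .flags = MIGRATE_VMA_SELECT_SYSTEM,  /* take system RAM pages */
        };
        unsigned long i;

        if (migrate_vma_setup(&args))       /* collect + unmap sources */
            return -EINVAL;

        for (i = 0; i < args.npages; i++) {
            if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
                continue;                   /* this source can't move */
            /* allocate a target, copy the data, publish the new pfn */
            args.dst[i] = migrate_pfn(page_to_pfn(new_device_page(i)));
        }

        migrate_vma_pages(&args);      /* install dst pages where set */
        migrate_vma_finalize(&args);   /* restore ptes, drop references */
        return 0;
    }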
2934 * @migrate: migrate struct containing all migration information
2937 * struct page. This effectively finishes the migration from source page to the
3015 * @migrate: migrate struct containing all migration information
3017 * This replaces the special migration pte entry with either a mapping to the
3018 * new page if migration was successful for that page, or to the original page
3070 /* Disable reclaim-based migration. */
3108 * Cannot set a migration target on a in establish_migrate_target()
3132 * Establish a "migration path" which will start at nodes
3155 * a momentary gap when migration is disabled. in __set_migration_target_nodes()
3161 * the migration path starts at the nodes with CPUs. in __set_migration_target_nodes()
3168 * To avoid cycles in the migration "graph", ensure in __set_migration_target_nodes()
3169 * that migration sources are not future targets by in __set_migration_target_nodes()
3193 * 'next_pass' contains nodes which became migration in __set_migration_target_nodes()
3214 * whether reclaim-based migration is enabled or not, which
3215 * ensures that the user can turn reclaim-based migration on or off at
3216 * any time without needing to recalculate migration targets.
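Reclaim-based migration materializes the "migration path" as a per-node lookup: each node records the next hop in its demotion path, and reclaim consults it. A sketch in the shape of the 5.15-era next_demotion_node() and its one-hop node_demotion[] array (later kernels generalized this; treat as illustrative):

    /* Sketch: where should reclaim demote pages from 'node'? */
    static int node_demotion[MAX_NUMNODES] __read_mostly = {
        [0 ... MAX_NUMNODES - 1] = NUMA_NO_NODE
    };

    int next_demotion_node(int node)
    {
        /*
         * Readers race with __set_migration_target_nodes(); a momentary
         * NUMA_NO_NODE just means "no demotion right now".
         */
        return READ_ONCE(node_demotion[node]);
    }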
3228 * Only update the node migration order when a node is in migrate_on_reclaim_callback()
3239 * an offline node is a migration target. This in migrate_on_reclaim_callback()
3240 * will leave migration disabled until the offline in migrate_on_reclaim_callback()
3255 * MEM_GOING_OFFLINE disabled all the migration in migrate_on_reclaim_callback()
3269 * React to hotplug events that might affect the migration targets
3295 * migration targets may become suboptimal for nodes in migrate_on_reclaim_init()
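The hotplug fragments come together in an initcall that wires target recalculation into both memory and CPU hotplug, since either can change where migration paths start. A sketch in the shape of migrate_on_reclaim_init(); the notifier priority and the use of CPUHP_AP_ONLINE_DYN (rather than the dedicated demotion cpuhp states the kernel actually defines) are assumptions for this sketch:

    static int __init migrate_on_reclaim_init(void)
    {
        /* Recompute demotion targets when memory goes on/offline */
        hotplug_memory_notifier(migrate_on_reclaim_callback, 100);

        /*
         * CPU hotplug can turn a node with CPUs into a memory-only
         * node (or back), which changes where migration paths start.
         */
        cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mm/demotion:online",
                          migration_online_cpu, migration_offline_cpu);
        return 0;
    }
    subsys_initcall(migrate_on_reclaim_init);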