Lines Matching full:swap

6  *  Swap reorganised 29.12.95, Stephen Tweedie
16 #include <linux/swap.h>
56 * Some modules use swappable objects and may try to swap them out under
58 * check to see if any swap space is available.
65 static const char Bad_file[] = "Bad swap file entry ";
66 static const char Unused_file[] = "Unused swap file entry ";
67 static const char Bad_offset[] = "Bad swap offset entry ";
68 static const char Unused_offset[] = "Unused swap offset entry ";
115 /* Reclaim the swap entry anyway if possible */
118 * Reclaim the swap entry if there are no more mappings of the
122 /* Reclaim the swap entry if swap is getting full */
125 /* returns 1 if swap entry is freed */
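The comments above describe two reclaim conditions: free a swap entry when no mappings reference it any more, or reclaim it anyway when swap space is getting full. A minimal sketch of that decision, with illustrative names that are not the kernel's:

```c
#include <stdbool.h>

enum reclaim_mode {
	RECLAIM_IF_UNMAPPED,	/* only if no more mappings reference the entry */
	RECLAIM_IF_FULL,	/* reclaim anyway because swap is getting full  */
};

/* returns 1 if the swap entry would be freed, 0 otherwise */
static int try_reclaim_swap_entry(int map_count, bool swap_full,
				  enum reclaim_mode mode)
{
	if (mode == RECLAIM_IF_UNMAPPED)
		return map_count == 0;
	return swap_full;	/* RECLAIM_IF_FULL */
}
```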
167 * swapon tells the device that all the old swap contents can be discarded,
168 * to allow the swap device to optimize its wear-levelling.
177 /* Do not discard the swap header page! */ in discard_swap()
224 * swap allocation tells the device that a cluster of swap can now be discarded,
225 * to allow the swap device to optimize its wear-levelling.
445 * taken by scan_swap_map(), mark the swap entries bad (occupied). It in swap_cluster_schedule_discard()
517 * If the swap is discardable, prepare to discard the cluster in free_cluster()
595 * Try to get a swap entry from current cpu's swap entry pool (a cluster). This
740 * Cross the swap address space size aligned trunk, choose in set_cluster_next()
741 * another trunk randomly to avoid lock contention on swap in set_cluster_next()
746 /* No free swap slots available */ in set_cluster_next()
770 * We try to cluster swap pages by allocating them sequentially in scan_swap_map_slots()
771 * in swap. Once we've allocated SWAPFILE_CLUSTER pages this in scan_swap_map_slots()
773 * a new cluster. This prevents us from scattering swap pages in scan_swap_map_slots()
774 * all over the entire swap partition, so that we reduce in scan_swap_map_slots()
775 * overall disk seek times between swap pages. -- sct in scan_swap_map_slots()
777 * And we let swap pages go all over an SSD partition. Hugh in scan_swap_map_slots()
783 * cluster and swap cache. For HDD, sequential access is more in scan_swap_map_slots()
806 * start of partition, to minimize the span of allocated swap. in scan_swap_map_slots()
854 /* reuse swap entry of cache-only swap if not busy. */ in scan_swap_map_slots()
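The scan_swap_map_slots() comments above describe allocating swap slots sequentially within a cluster of SWAPFILE_CLUSTER pages, then moving to a fresh cluster, so swap pages are not scattered across the whole partition. A toy model of that allocation pattern (names and the caller-supplied next-cluster start are illustrative, not the kernel's):

```c
#define SWAPFILE_CLUSTER 256UL

/*
 * Hand out slots consecutively; when the current cluster is
 * exhausted, jump to the start of the next free cluster.
 */
static unsigned long next_slot(unsigned long *cur, unsigned long *left,
			       unsigned long next_cluster_start)
{
	if (*left == 0) {	/* current cluster exhausted */
		*cur = next_cluster_start;
		*left = SWAPFILE_CLUSTER;
	}
	(*left)--;
	return (*cur)++;
}
```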
983 * page swap is disabled. Warn and fail the allocation. in swap_alloc_cluster()
1132 /* This is called for allocating swap entry, not cache */ in get_swap_page_of_type()
1258 * Check whether swap entry is valid in the swap device. If so,
1259 * return pointer to swap_info_struct, and keep the swap entry valid
1260 * via preventing the swap device from being swapoff, until
1286 * changing partly because the specified swap entry may be for another
1287 * swap device which has been swapoff. And in do_swap_page(), after
1288 * the page is read from the swap device, the PTE is verified not
1289 * changed with the page table locked to check whether the swap device
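The comments above describe the stabilize-then-recheck pattern: get_swap_device() pins the swap device so swapoff cannot complete, and callers later re-verify the PTE under the page table lock. A simplified sketch using a plain atomic user count and valid flag in place of the kernel's RCU/percpu machinery (illustrative only):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct swap_dev {
	atomic_int users;	/* readers currently inside the device */
	atomic_bool valid;	/* cleared by swapoff                  */
};

/* Pin the device so swapoff cannot finish; fail if it already ran. */
static bool get_swap_device(struct swap_dev *si)
{
	atomic_fetch_add(&si->users, 1);
	if (!atomic_load(&si->valid)) {
		atomic_fetch_sub(&si->users, 1);
		return false;	/* entry belongs to a swapped-off device */
	}
	return true;
}

static void put_swap_device(struct swap_dev *si)
{
	atomic_fetch_sub(&si->users, 1);
}
```

Swapoff, in this model, first clears `valid` and then waits for `users` to drain before tearing the device down.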
1354 * Caller has made sure that the swap device corresponding to entry
1367 * Called after dropping swapcache to decrease refcnt to swap entries.
1452 * Sort swap entries by swap device, so each lock is only taken once. in swapcache_free_entries()
1470 * This does not give an exact answer when swap count is continued,
1520 * This does not give an exact answer when swap count is continued,
1689 * to it. And as a side-effect, free up its swap: because the old content
1710 /* The remaining swap count will be freed soon */ in reuse_swap_page()
1734 * If swap is getting full, or if there are no more mappings of this page,
1735 * then try_to_free_swap is called to free its swap space.
1752 * hibernation is allocating its own swap pages for the image, in try_to_free_swap()
1754 * the swap from a page which has already been recorded in the in try_to_free_swap()
1755 * image as a clean swapcache page, and then reuse its swap for in try_to_free_swap()
1758 * later read back in from swap, now with the wrong data. in try_to_free_swap()
1773 * Free the swap entry like above, but also try to
1797 * Find the swap type that corresponds to given device (if any).
1800 * from 0, in which the swap header is expected to be located.
1851 * corresponding to given index in swap_info (swap type).
1864 * Return either the total number of swap pages of given type, or the number
1896 * No need to decide whether this PTE shares the swap entry with others,
2226 * swap cache just before we acquired the page lock. The page in try_to_unuse()
2227 * might even be back in swap cache on another swap area. But in try_to_unuse()
2246 * Let's check again to see if there are still swap entries in the map. in try_to_unuse()
2248 * Under global memory pressure, swap entries can be reinserted back in try_to_unuse()
2252 * that mm is likely to be freeing swap from exit_mmap(), which proceeds in try_to_unuse()
2254 * been preempted after get_swap_page(), temporarily hiding that swap. in try_to_unuse()
2267 * After a successful try_to_unuse, if no swap is now in use, we know
2288 * corresponds to page offset for the specified swap entry.
2307 * Returns the page offset into bdev for the specified page's swap entry.
2387 * A `swap extent' is a simple thing which maps a contiguous range of pages
2388 * onto a contiguous range of disk blocks. An ordered list of swap extents
2394 * swap files identically.
2396 * Whether the swapdev is an S_ISREG file or an S_ISBLK blockdev, the swap
2406 * For all swap devices we set S_SWAPFILE across the life of the swapon. This
2407 * prevents users from writing to the swap device, which will corrupt memory.
2409 * The amount of disk space which a single swap extent represents varies.
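The comments above define a swap extent as a mapping from a contiguous range of swap pages to a contiguous range of disk blocks. A minimal model of one extent and its page-to-block translation (field and function names are illustrative):

```c
struct swap_extent {
	unsigned long start_page;	/* first swap page offset covered */
	unsigned long nr_pages;		/* length of the run in pages     */
	unsigned long start_block;	/* first disk block of the run    */
};

/* Translate a swap page offset to its disk block, or -1 if uncovered. */
static long extent_to_block(const struct swap_extent *ext, unsigned long page)
{
	if (page < ext->start_page ||
	    page >= ext->start_page + ext->nr_pages)
		return -1;	/* offset not covered by this extent */
	return (long)(ext->start_block + (page - ext->start_page));
}
```

A block device typically needs only one extent covering the whole device, while a fragmented swap file needs an ordered list of them.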
2468 * low-to-high, while swap ordering is high-to-low in setup_swap_info()
2499 * which allocates swap pages from the highest available priority in _enable_swap_info()
2629 /* re-insert swap space back into swap_list */ in SYSCALL_DEFINE1()
2639 p->flags &= ~SWP_VALID; /* mark swap device as invalid */ in SYSCALL_DEFINE1()
2643 * wait for swap operations protected by get/put_swap_device() in SYSCALL_DEFINE1()
2694 /* Destroy swap account information */ in SYSCALL_DEFINE1()
2747 static void *swap_start(struct seq_file *swap, loff_t *pos) in swap_start() argument
2768 static void *swap_next(struct seq_file *swap, void *v, loff_t *pos) in swap_next() argument
2788 static void swap_stop(struct seq_file *swap, void *v) in swap_stop() argument
2793 static int swap_show(struct seq_file *swap, void *v) in swap_show() argument
2801 seq_puts(swap, "Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority\n"); in swap_show()
2809 len = seq_file_path(swap, file, " \t\n\\"); in swap_show()
2810 seq_printf(swap, "%*s%s\t%u\t%s%u\t%s%d\n", in swap_show()
2952 * Find out how many pages are allowed for a single swap device. There
2954 * 1) the number of bits for the swap offset in the swp_entry_t type, and
2955 * 2) the number of bits in the swap pte, as defined by the different
2958 * In order to find the largest possible bit mask, a swap entry with
2959 * swap type 0 and swap offset ~0UL is created, encoded to a swap pte,
2960 * decoded to a swp_entry_t again, and finally the swap offset is
2965 * of a swap pte.
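The comments above describe how the largest usable swap offset is found: build a swap entry with type 0 and offset ~0UL, encode it into a swap pte, decode it back, and whatever offset survives is the maximum the pte can represent. A standalone sketch with a mock 50-bit offset field (the width is an arbitrary stand-in for an architecture's real pte layout):

```c
#define SWP_PTE_OFFSET_BITS 50
#define SWP_PTE_OFFSET_MASK ((1UL << SWP_PTE_OFFSET_BITS) - 1)

/* Encoding into the narrow pte field silently drops the high bits. */
static unsigned long swp_entry_to_pte(unsigned long offset)
{
	return offset & SWP_PTE_OFFSET_MASK;
}

static unsigned long pte_to_swp_offset(unsigned long pte)
{
	return pte & SWP_PTE_OFFSET_MASK;
}

/* Round-trip ~0UL: the surviving value is the largest representable offset. */
static unsigned long max_swap_offset(void)
{
	return pte_to_swp_offset(swp_entry_to_pte(~0UL));
}
```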
2989 pr_err("Unable to find swap-space signature\n"); in read_swap_header()
2993 /* swap partition endianness hack... */ in read_swap_header()
3003 /* Check the swap header's sub-version */ in read_swap_header()
3005 pr_warn("Unable to handle swap header version %d\n", in read_swap_header()
3017 pr_warn("Empty swap-file\n"); in read_swap_header()
3021 pr_warn("Truncating oversized swap area, only using %luk out of %luk\n", in read_swap_header()
3037 pr_warn("Swap area shorter than signature indicates\n"); in read_swap_header()
3108 pr_warn("Empty swap-file\n"); in setup_swap_map_and_extents()
3137 * Helper to sys_swapon determining if a given swap
3212 * Read the swap header. in SYSCALL_DEFINE2()
3231 /* OK, set up the swap map and apply the bad block list */ in SYSCALL_DEFINE2()
3307 * When discard is enabled for swap with no particular in SYSCALL_DEFINE2()
3308 * policy flagged, we set all swap discard flags here in in SYSCALL_DEFINE2()
3318 * perform discards for released swap page-clusters. in SYSCALL_DEFINE2()
3341 * swap device. in SYSCALL_DEFINE2()
3357 pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s%s\n", in SYSCALL_DEFINE2()
3431 * Verify that a swap entry is valid and increment its swap map count.
3437 * - swap-cache reference is requested but there is already one. -> EEXIST
3438 * - swap-cache reference is requested but the entry is not used. -> ENOENT
3439 * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
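The failure cases listed above for __swap_duplicate() map onto distinct errno values. A hypothetical condensation of that table into one check function (flags passed as plain ints; names are illustrative):

```c
#include <errno.h>

/* Returns 0 on success or a negative errno matching the cases above. */
static int swap_duplicate_check(int count, int cache_ref_requested,
				int has_cache, int count_at_max)
{
	if (cache_ref_requested && has_cache)
		return -EEXIST;	/* swap-cache reference already present */
	if (cache_ref_requested && count == 0)
		return -ENOENT;	/* entry is not in use                  */
	if (!cache_ref_requested && count_at_max)
		return -ENOMEM;	/* needs a swap count continuation      */
	return 0;
}
```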
3460 * swapin_readahead() doesn't check if a swap entry is valid, so the in __swap_duplicate()
3461 * swap entry could be SWAP_MAP_BAD. Check here with lock held. in __swap_duplicate()
3493 err = -ENOENT; /* unused swap entry */ in __swap_duplicate()
3506 * Help swapoff by noting that swap entry belongs to shmem/tmpfs
3515 * Increase reference count of swap entry by 1.
3531 * @entry: swap entry for which we allocate swap cache.
3533 * Called when allocating swap cache for existing swap entry,
3535 * -EEXIST means there is a swap cache.
3565 swp_entry_t swap = { .val = page_private(page) }; in __page_file_index() local
3566 return swp_offset(swap); in __page_file_index()
3571 * add_swap_count_continuation - called when a swap count is duplicated
3574 * (for that entry and for its neighbouring PAGE_SIZE swap entries). Called
3606 * __swap_duplicate(): the swap device may be swapoff in add_swap_count_continuation()
3620 * The higher the swap count, the more likely it is that tasks in add_swap_count_continuation()
3621 * will race to add swap count continuation: we need to avoid in add_swap_count_continuation()
3694 * Called while __swap_duplicate() or swap_entry_free() holds swap or cluster
3812 * We've already scheduled a throttle, avoid taking the global swap in cgroup_throttle_swaprate()
3837 pr_emerg("Not enough memory for swap heads, swap is disabled\n"); in swapfile_init()