
Searched full:pages (Results 1 – 25 of 2923) sorted by relevance


/Linux-v5.15/Documentation/admin-guide/mm/
hugetlbpage.rst:4 HugeTLB Pages
30 persistent hugetlb pages in the kernel's huge page pool. It also displays
32 and surplus huge pages in the pool of huge pages of default size.
48 is the size of the pool of huge pages.
50 is the number of huge pages in the pool that are not yet
53 is short for "reserved," and is the number of huge pages for
55 but no allocation has yet been made. Reserved huge pages
57 huge page from the pool of huge pages at fault time.
59 is short for "surplus," and is the number of huge pages in
61 maximum number of surplus huge pages is controlled by
[all …]
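
For context, the counters this hugetlbpage.rst result refers to are exported through /proc/meminfo; a minimal userspace sketch (not part of the kernel tree) that prints the pool, free, reserved and surplus huge page counts::

    /* Print the hugetlb pool counters (HugePages_Total/Free/Rsvd/Surp). */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f) {
            perror("fopen /proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            if (!strncmp(line, "HugePages_", 10) ||
                !strncmp(line, "Hugepagesize", 12))
                fputs(line, stdout);
        }
        fclose(f);
        return 0;
    }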
ksm.rst:20 which have been registered with it, looking for pages of identical
23 content). The amount of pages that KSM daemon scans in a single pass
27 KSM only merges anonymous (private) pages, never pagecache (file) pages.
28 KSM's merged pages were originally locked into kernel memory, but can now
29 be swapped out just like other user pages (but sharing is broken when they
47 to cancel that advice and restore unshared pages: whereupon KSM
57 cannot contain any pages which KSM could actually merge; even if
82 how many pages to scan before ksmd goes to sleep
94 specifies if pages from different NUMA nodes can be merged.
95 When set to 0, ksm merges only pages which physically reside
[all …]
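
As the ksm.rst result notes, KSM only considers anonymous pages that an application has registered with it; a hedged userspace sketch of that registration via madvise(MADV_MERGEABLE)::

    /* Map an anonymous region, fill it with identical content, and offer
     * it to ksmd with MADV_MERGEABLE (requires CONFIG_KSM=y). */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 4096;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(buf, 0x5a, len);        /* identical pages are mergeable */

        if (madvise(buf, len, MADV_MERGEABLE))
            perror("madvise(MADV_MERGEABLE)");

        munmap(buf, len);
        return 0;
    }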
concepts.rst:43 The physical system memory is divided into page frames, or pages. The
50 pages. These mappings are described by page tables that allow
55 addresses of actual pages used by the software. The tables at higher
56 levels contain physical addresses of the pages belonging to the lower
66 Huge Pages
77 Many modern CPU architectures allow mapping of the memory pages
79 it is possible to map 2M and even 1G pages using entries in the second
80 and the third level page tables. In Linux such pages are called
81 `huge`. Usage of huge pages significantly reduces pressure on TLB,
85 memory with the huge pages. The first one is `HugeTLB filesystem`, or
[all …]
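
The concepts.rst excerpt mentions HugeTLB as one way to map memory with huge pages; a hedged sketch of requesting a hugetlb-backed anonymous mapping (assumes 2M huge pages have been reserved, e.g. via /proc/sys/vm/nr_hugepages)::

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 2UL * 1024 * 1024;        /* one 2M huge page on x86-64 */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

        if (p == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");
            return 1;
        }
        ((char *)p)[0] = 1;        /* touch it to fault in the huge page */
        munmap(p, len);
        return 0;
    }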
idle_page_tracking.rst:10 The idle page tracking feature allows to track which memory pages are being
39 Only accesses to user memory pages are tracked. These are pages mapped to a
40 process address space, page cache and buffer pages, swap cache pages. For other
41 page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
42 and hence such pages are never reported idle.
44 For huge pages the idle flag is set only on the head page, so one has to read
45 ``/proc/kpageflags`` in order to correctly count idle huge pages.
52 That said, in order to estimate the amount of pages that are not used by a
55 1. Mark all the workload's pages as idle by setting corresponding bits in
56 ``/sys/kernel/mm/page_idle/bitmap``. The pages can be found by reading
[all …]
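
Step 1 of the procedure quoted above writes to /sys/kernel/mm/page_idle/bitmap; a hedged sketch, where START_PFN is a hypothetical page frame number taken from /proc/<pid>/pagemap and the bitmap is accessed as 8-byte words, one bit per PFN::

    /* Mark a run of page frames idle (requires CONFIG_IDLE_PAGE_TRACKING
     * and root). START_PFN is hypothetical; obtain real PFNs from
     * /proc/<pid>/pagemap. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define START_PFN 0x100000UL        /* hypothetical, multiple of 64 */
    #define NR_WORDS  16                /* 16 * 64 = 1024 page frames */

    int main(void)
    {
        uint64_t ones = ~0ULL;
        int fd = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);

        if (fd < 0) {
            perror("open page_idle bitmap");
            return 1;
        }
        for (unsigned i = 0; i < NR_WORDS; i++) {
            off_t off = (START_PFN / 64 + i) * sizeof(uint64_t);

            if (pwrite(fd, &ones, sizeof(ones), off) != sizeof(ones))
                perror("pwrite");
        }
        close(fd);
        return 0;
    }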
transhuge.rst:13 using huge pages for the backing of virtual memory with huge pages
53 collapses sequences of basic pages into huge pages.
109 pages unless hugepages are immediately available. Clearly if we spend CPU
111 use hugepages later instead of regular pages. This isn't always
125 allocation failure and directly reclaim pages and compact
132 to reclaim pages and wake kcompactd to compact memory so that
134 of khugepaged to then install the THP pages later.
140 pages and wake kcompactd to compact memory so that THP is
179 You can also control how many pages khugepaged should scan at each
194 The khugepaged progress can be seen in the number of pages collapsed::
[all …]
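
The khugepaged counters mentioned in this transhuge.rst result live under /sys/kernel/mm/transparent_hugepage/khugepaged/; a hedged sketch that prints the scan batch size and the collapse progress::

    #include <stdio.h>

    #define KHUGEPAGED "/sys/kernel/mm/transparent_hugepage/khugepaged/"

    static void show(const char *path)
    {
        char buf[64];
        FILE *f = fopen(path, "r");

        if (f && fgets(buf, sizeof(buf), f))
            printf("%s: %s", path, buf);
        if (f)
            fclose(f);
    }

    int main(void)
    {
        show(KHUGEPAGED "pages_to_scan");        /* pages scanned per wakeup */
        show(KHUGEPAGED "pages_collapsed");      /* huge pages created so far */
        return 0;
    }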
/Linux-v5.15/mm/
percpu-vm.c:23 * pcpu_get_pages - get temp pages array
30 * Pointer to temp pages array on success.
34 static struct page **pages; in pcpu_get_pages() local
35 size_t pages_size = pcpu_nr_units * pcpu_unit_pages * sizeof(pages[0]); in pcpu_get_pages()
39 if (!pages) in pcpu_get_pages()
40 pages = pcpu_mem_zalloc(pages_size, GFP_KERNEL); in pcpu_get_pages()
41 return pages; in pcpu_get_pages()
45 * pcpu_free_pages - free pages which were allocated for @chunk
46 * @chunk: chunk pages were allocated for
47 * @pages: array of pages to be freed, indexed by pcpu_page_idx()
[all …]
balloon_compaction.c:5 * Common interface for making balloon pages movable by compaction.
30 * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
33 * @pages: pages to enqueue - allocated using balloon_page_alloc.
35 * Driver must call this function to properly enqueue balloon pages before
38 * Return: number of pages that were enqueued.
41 struct list_head *pages) in balloon_page_list_enqueue() argument
48 list_for_each_entry_safe(page, tmp, pages, lru) { in balloon_page_list_enqueue()
59 * balloon_page_list_dequeue() - removes pages from balloon's page list and
60 * returns a list of the pages.
62 * @pages: pointer to the list of pages that would be returned to the caller.
[all …]
gup.c:84 * So now that the head page is stable, recheck that the pages still in try_get_compound_head()
115 * FOLL_PIN on compound pages that are > two pages long: page's refcount will
119 * FOLL_PIN on normal pages, or compound pages that are two pages long:
221 * Pages that were pinned via pin_user_pages*() must be released via either
223 * that such pages can be separately tracked and uniquely handled. In
285 * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
286 * @pages: array of pages to be maybe marked dirty, and definitely released.
287 * @npages: number of pages in the @pages array.
288 * @make_dirty: whether to mark the pages dirty
293 * For each page in the @pages array, make that page (or its head page, if a
[all …]
gup_test.c:9 static void put_back_pages(unsigned int cmd, struct page **pages, in put_back_pages() argument
18 put_page(pages[i]); in put_back_pages()
24 unpin_user_pages(pages, nr_pages); in put_back_pages()
28 unpin_user_pages(pages, nr_pages); in put_back_pages()
31 put_page(pages[i]); in put_back_pages()
38 static void verify_dma_pinned(unsigned int cmd, struct page **pages, in verify_dma_pinned() argument
49 page = pages[i]; in verify_dma_pinned()
51 "pages[%lu] is NOT dma-pinned\n", i)) { in verify_dma_pinned()
57 "pages[%lu] is NOT pinnable but pinned\n", in verify_dma_pinned()
67 static void dump_pages_test(struct gup_test *gup, struct page **pages, in dump_pages_test() argument
[all …]
hugetlb_vmemmap.c:3 * Free some vmemmap pages of HugeTLB
13 * HugeTLB pages consist of multiple base page size pages and is supported by
15 * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
17 * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
18 * 4096 base pages. For each base page, there is a corresponding page struct.
23 * is the compound_head field, and this field is the same for all tail pages.
25 * By removing redundant page structs for HugeTLB pages, memory can be returned
28 * Different architectures support different HugeTLB pages. For example, the
30 * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
47 * structs which size is (unit: pages):
[all …]
/Linux-v5.15/net/ceph/
pagevec.c:13 void ceph_put_page_vector(struct page **pages, int num_pages, bool dirty) in ceph_put_page_vector() argument
19 set_page_dirty_lock(pages[i]); in ceph_put_page_vector()
20 put_page(pages[i]); in ceph_put_page_vector()
22 kvfree(pages); in ceph_put_page_vector()
26 void ceph_release_page_vector(struct page **pages, int num_pages) in ceph_release_page_vector() argument
31 __free_pages(pages[i], 0); in ceph_release_page_vector()
32 kfree(pages); in ceph_release_page_vector()
37 * allocate a vector new pages
41 struct page **pages; in ceph_alloc_page_vector() local
44 pages = kmalloc_array(num_pages, sizeof(*pages), flags); in ceph_alloc_page_vector()
[all …]
/Linux-v5.15/Documentation/vm/
unevictable-lru.rst:15 pages.
30 pages and to hide these pages from vmscan. This mechanism is based on a patch
36 main memory will have over 32 million 4k pages in a single node. When a large
37 fraction of these pages are not evictable for any reason [see below], vmscan
39 of pages that are evictable. This can result in a situation where all CPUs are
43 The unevictable list addresses the following classes of unevictable pages:
51 The infrastructure may also be able to handle other conditions that make pages
66 The Unevictable LRU infrastructure maintains unevictable pages on an additional
69 (1) We get to "treat unevictable pages just like we treat other pages in the
74 (2) We want to be able to migrate unevictable pages between nodes for memory
[all …]
zswap.rst:10 Zswap is a lightweight compressed cache for swap pages. It takes pages that are
34 Zswap evicts pages from compressed cache on an LRU basis to the backing swap
48 When zswap is disabled at runtime it will stop storing pages that are
50 back into memory all of the pages stored in the compressed pool. The
51 pages stored in zswap will remain in the compressed pool until they are
53 pages out of the compressed pool, a swapoff on the swap device(s) will
54 fault back into memory all swapped out pages, including those in the
60 Zswap receives pages for compression through the Frontswap API and is able to
61 evict pages from its own compressed pool on an LRU basis and write them back to
68 pages are freed. The pool is not preallocated. By default, a zpool
[all …]
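
The runtime enable/disable behaviour described in this zswap.rst result is driven by the module parameter /sys/module/zswap/parameters/enabled; a hedged sketch that turns zswap off (root required)::

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/sys/module/zswap/parameters/enabled", "w");

        if (!f) {
            perror("zswap 'enabled' parameter");
            return 1;
        }
        fputs("N", f);        /* "Y" re-enables; existing entries stay pooled */
        fclose(f);
        return 0;
    }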
page_migration.rst:7 Page migration allows moving the physical location of pages between
10 system rearranges the physical location of those pages.
13 for migrating pages to or from device private memory.
16 by moving pages near to the processor where the process accessing that memory
20 pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
21 a new memory policy via mbind(). The pages of a process can also be relocated
23 migrate_pages() function call takes two sets of nodes and moves pages of a
30 pages of a process are located. See also the numa_maps documentation in the
35 administrator may detect the situation and move the pages of the process
38 through user space processes that move pages. A special function call
[all …]
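
The mbind()/MPOL_MF_MOVE path this page_migration.rst result mentions can be exercised from user space; a hedged sketch (build with -lnuma, assumes the system has a node 0)::

    #include <numaif.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 16 * 4096;
        unsigned long nodemask = 1UL << 0;        /* target node 0 */
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(buf, 0, len);        /* fault the pages in first */

        /* Bind to node 0 and migrate already-allocated pages there. */
        if (mbind(buf, len, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
                  MPOL_MF_MOVE))
            perror("mbind(MPOL_MF_MOVE)");

        munmap(buf, len);
        return 0;
    }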
/Linux-v5.15/fs/isofs/
compress.c:37 * to one zisofs block. Store the data in the @pages array with @pcount
42 struct page **pages, unsigned poffset, in zisofs_uncompress_block() argument
68 if (!pages[i]) in zisofs_uncompress_block()
70 memset(page_address(pages[i]), 0, PAGE_SIZE); in zisofs_uncompress_block()
71 flush_dcache_page(pages[i]); in zisofs_uncompress_block()
72 SetPageUptodate(pages[i]); in zisofs_uncompress_block()
122 if (pages[curpage]) { in zisofs_uncompress_block()
123 stream.next_out = page_address(pages[curpage]) in zisofs_uncompress_block()
175 if (pages[curpage]) { in zisofs_uncompress_block()
176 flush_dcache_page(pages[curpage]); in zisofs_uncompress_block()
[all …]
/Linux-v5.15/drivers/gpu/drm/i915/gem/selftests/
huge_gem_object.c:12 struct sg_table *pages) in huge_free_pages() argument
18 for_each_sgt_page(page, sgt_iter, pages) { in huge_free_pages()
24 sg_free_table(pages); in huge_free_pages()
25 kfree(pages); in huge_free_pages()
34 struct sg_table *pages; in huge_get_pages() local
37 pages = kmalloc(sizeof(*pages), GFP); in huge_get_pages()
38 if (!pages) in huge_get_pages()
41 if (sg_alloc_table(pages, npages, GFP)) { in huge_get_pages()
42 kfree(pages); in huge_get_pages()
46 sg = pages->sgl; in huge_get_pages()
[all …]
/Linux-v5.15/fs/erofs/
pcpubuf.c:6 * per-CPU virtual memory (in pages) in advance to store such inplace I/O
15 struct page **pages; member
64 struct page **pages, **oldpages; in erofs_pcpubuf_growsize() local
67 pages = kmalloc_array(nrpages, sizeof(*pages), GFP_KERNEL); in erofs_pcpubuf_growsize()
68 if (!pages) { in erofs_pcpubuf_growsize()
74 pages[i] = erofs_allocpage(&pagepool, GFP_KERNEL); in erofs_pcpubuf_growsize()
75 if (!pages[i]) { in erofs_pcpubuf_growsize()
77 oldpages = pages; in erofs_pcpubuf_growsize()
81 ptr = vmap(pages, nrpages, VM_MAP, PAGE_KERNEL); in erofs_pcpubuf_growsize()
84 oldpages = pages; in erofs_pcpubuf_growsize()
[all …]
/Linux-v5.15/drivers/gpu/drm/xen/
xen_drm_front_gem.c:30 struct page **pages; member
49 xen_obj->pages = kvmalloc_array(xen_obj->num_pages, in gem_alloc_pages_array()
51 return !xen_obj->pages ? -ENOMEM : 0; in gem_alloc_pages_array()
56 kvfree(xen_obj->pages); in gem_free_pages_array()
57 xen_obj->pages = NULL; in gem_free_pages_array()
108 * only allocate array of pointers to pages in gem_create()
115 * allocate ballooned pages which will be used to map in gem_create()
119 xen_obj->pages); in gem_create()
121 DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n", in gem_create()
131 * need to allocate backing pages now, so we can share those in gem_create()
[all …]
/Linux-v5.15/drivers/xen/
xlate_mmu.c:47 /* Break down the pages in 4KB chunk and call fn for each gfn */
48 static void xen_for_each_gfn(struct page **pages, unsigned nr_gfn, in xen_for_each_gfn() argument
57 page = pages[i / XEN_PFN_PER_PAGE]; in xen_for_each_gfn()
71 struct page **pages; member
99 struct page *page = info->pages[info->index++]; in remap_pte_fn()
148 struct page **pages) in xen_xlate_remap_gfn_array() argument
163 data.pages = pages; in xen_xlate_remap_gfn_array()
184 int nr, struct page **pages) in xen_xlate_unmap_gfn_range() argument
186 xen_for_each_gfn(pages, nr, unmap_gfn, NULL); in xen_xlate_unmap_gfn_range()
205 * xen_xlate_map_ballooned_pages - map a new set of ballooned pages
[all …]
/Linux-v5.15/fs/ramfs/
file-nommu.c:58 * add a contiguous set of pages into a ramfs inode when it's truncated from
65 struct page *pages; in ramfs_nommu_expand_for_mapping() local
82 /* allocate enough contiguous pages to be able to satisfy the in ramfs_nommu_expand_for_mapping()
84 pages = alloc_pages(gfp, order); in ramfs_nommu_expand_for_mapping()
85 if (!pages) in ramfs_nommu_expand_for_mapping()
88 /* split the high-order page into an array of single pages */ in ramfs_nommu_expand_for_mapping()
92 split_page(pages, order); in ramfs_nommu_expand_for_mapping()
94 /* trim off any pages we don't actually require */ in ramfs_nommu_expand_for_mapping()
96 __free_page(pages + loop); in ramfs_nommu_expand_for_mapping()
100 data = page_address(pages); in ramfs_nommu_expand_for_mapping()
[all …]
/Linux-v5.15/include/xen/
xen-ops.h:75 struct page **pages);
77 int nr, struct page **pages);
88 struct page **pages) in xen_xlate_remap_gfn_array() argument
94 int nr, struct page **pages) in xen_xlate_unmap_gfn_range() argument
105 * @vma: VMA to map the pages into
106 * @addr: Address at which to map the pages
111 * @domid: Domain owning the pages
112 * @pages: Array of pages if this domain has an auto-translated physmap
125 struct page **pages) in xen_remap_domain_gfn_array() argument
129 prot, domid, pages); in xen_remap_domain_gfn_array()
[all …]
/Linux-v5.15/kernel/dma/
remap.c:15 return area->pages; in dma_common_find_pages()
19 * Remaps an array of PAGE_SIZE pages into another vm_area.
22 void *dma_common_pages_remap(struct page **pages, size_t size, in dma_common_pages_remap() argument
27 vaddr = vmap(pages, PAGE_ALIGN(size) >> PAGE_SHIFT, in dma_common_pages_remap()
30 find_vm_area(vaddr)->pages = pages; in dma_common_pages_remap()
42 struct page **pages; in dma_common_contiguous_remap() local
46 pages = kmalloc_array(count, sizeof(struct page *), GFP_KERNEL); in dma_common_contiguous_remap()
47 if (!pages) in dma_common_contiguous_remap()
50 pages[i] = nth_page(page, i); in dma_common_contiguous_remap()
51 vaddr = vmap(pages, count, VM_DMA_COHERENT, prot); in dma_common_contiguous_remap()
[all …]
/Linux-v5.15/Documentation/core-api/
pin_user_pages.rst:35 In other words, use pin_user_pages*() for DMA-pinned pages, and
40 multiple threads and call sites are free to pin the same struct pages, via both
55 pages* array, and the function then pins pages by incrementing each by a special
58 For huge pages (and in fact, any compound page of more than 2 pages), the
63 This approach for compound pages avoids the counting upper limit problems that
65 huge pages, because each tail page adds a refcount to the head page. And in
69 This also means that huge pages and compound pages (of order > 1) do not suffer
80 but the caller passed in a non-null struct pages* array, then the function
81 sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
90 Tracking dma-pinned pages
[all …]
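
The pin_user_pages.rst result above describes the FOLL_PIN interface drivers use for DMA; a hedged, schematic in-kernel fragment (not from the tree, names are illustrative) pairing pin_user_pages_fast() with unpin_user_pages_dirty_lock()::

    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Illustrative only: pin a user buffer for DMA, then release it. */
    static int example_pin_user_buffer(unsigned long uaddr, int nr_pages)
    {
        struct page **pages;
        int pinned;

        pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
            return -ENOMEM;

        /* pin_user_pages_fast() implies FOLL_PIN: pages become dma-pinned. */
        pinned = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
        if (pinned < 0) {
            kvfree(pages);
            return pinned;
        }

        /* ... set up and run DMA against pages[0..pinned-1] here ... */

        /* Release and mark dirty, since the device may have written them. */
        unpin_user_pages_dirty_lock(pages, pinned, true);
        kvfree(pages);
        return 0;
    }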
/Linux-v5.15/drivers/misc/
vmw_balloon.c:8 * acts like a "balloon" that can be inflated to reclaim physical pages by
10 * freeing up the underlying machine pages so they can be allocated to
53 /* Maximum number of refused pages we accumulate during inflation cycle */
149 * ballooned pages (up to 512).
151 * pages that are about to be deflated from the
154 * for 2MB pages.
157 * pages.
242 struct list_head pages; member
317 * @batch_max_pages: maximum pages that can be locked/unlocked.
319 * Indicates the number of pages that the hypervisor can lock or unlock
[all …]
/Linux-v5.15/fs/squashfs/
file_direct.c:22 int pages, struct page **page, int bytes);
36 int i, n, pages, missing_pages, bytes, res = -ENOMEM; in squashfs_readpage_block() local
44 pages = end_index - start_index + 1; in squashfs_readpage_block()
46 page = kmalloc_array(pages, sizeof(void *), GFP_KERNEL); in squashfs_readpage_block()
52 * page cache pages appropriately within the decompressor in squashfs_readpage_block()
54 actor = squashfs_page_actor_init_special(page, pages, 0); in squashfs_readpage_block()
58 /* Try to grab all the pages covered by the Squashfs block */ in squashfs_readpage_block()
59 for (missing_pages = 0, i = 0, n = start_index; i < pages; i++, n++) { in squashfs_readpage_block()
78 * Couldn't get one or more pages, this page has either in squashfs_readpage_block()
84 res = squashfs_read_cache(target_page, block, bsize, pages, in squashfs_readpage_block()
[all …]
