/Linux-v5.10/mm/ |
D | percpu-vm.c |
    22  * pcpu_get_pages - get temp pages array
    29  * Pointer to temp pages array on success.
    33  static struct page **pages; in pcpu_get_pages() local
    34  size_t pages_size = pcpu_nr_units * pcpu_unit_pages * sizeof(pages[0]); in pcpu_get_pages()
    38  if (!pages) in pcpu_get_pages()
    39  pages = pcpu_mem_zalloc(pages_size, GFP_KERNEL); in pcpu_get_pages()
    40  return pages; in pcpu_get_pages()
    44  * pcpu_free_pages - free pages which were allocated for @chunk
    45  * @chunk: chunk pages were allocated for
    46  * @pages: array of pages to be freed, indexed by pcpu_page_idx()
    [all …]
|
D | balloon_compaction.c |
    5  * Common interface for making balloon pages movable by compaction.
    30  * balloon_page_list_enqueue() - inserts a list of pages into the balloon page
    33  * @pages: pages to enqueue - allocated using balloon_page_alloc.
    35  * Driver must call this function to properly enqueue balloon pages before
    38  * Return: number of pages that were enqueued.
    41  struct list_head *pages) in balloon_page_list_enqueue() argument
    48  list_for_each_entry_safe(page, tmp, pages, lru) { in balloon_page_list_enqueue()
    59  * balloon_page_list_dequeue() - removes pages from balloon's page list and
    60  * returns a list of the pages.
    62  * @pages: pointer to the list of pages that would be returned to the caller.
    [all …]
|
D | gup.c |
    219  * Pages that were pinned via pin_user_pages*() must be released via either
    221  * that such pages can be separately tracked and uniquely handled. In
    231  * For devmap managed pages we need to catch refcount transition from in unpin_user_page()
    252  * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
    253  * @pages: array of pages to be maybe marked dirty, and definitely released.
    254  * @npages: number of pages in the @pages array.
    255  * @make_dirty: whether to mark the pages dirty
    260  * For each page in the @pages array, make that page (or its head page, if a
    262  * listed as clean. In any case, releases all pages using unpin_user_page(),
    273  void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages, in unpin_user_pages_dirty_lock() argument
    [all …]
|
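The gup.c comments above describe a strict pairing: pages pinned with pin_user_pages*() must be released with unpin_user_page*(), never put_page(). A minimal kernel-side sketch of that discipline, assuming the v5.10-era pin_user_pages_fast() and unpin_user_pages_dirty_lock() signatures; the function and buffer names here are made up for illustration:

#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical caller: pin a user buffer, DMA into it, then unpin. */
static int demo_pin_and_dirty(unsigned long uaddr, int nr_pages)
{
	struct page **pages;
	int pinned;

	pages = kvcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* pin_user_pages_fast() implies FOLL_PIN; ask for write access. */
	pinned = pin_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
	if (pinned < 0) {
		kvfree(pages);
		return pinned;
	}

	/* ... device DMA into the pinned pages would happen here ... */

	/* Release the pins; mark the pages dirty because the DMA wrote them. */
	unpin_user_pages_dirty_lock(pages, pinned, true);
	kvfree(pages);
	return 0;
}

The put_back_pages() helper in gup_benchmark.c below shows the same rule from the test side: unpin_user_pages() for pinned pages, put_page() only for plain get_user_pages() references.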
D | gup_benchmark.c |
    24  static void put_back_pages(unsigned int cmd, struct page **pages, in put_back_pages() argument
    33  put_page(pages[i]); in put_back_pages()
    39  unpin_user_pages(pages, nr_pages); in put_back_pages()
    44  static void verify_dma_pinned(unsigned int cmd, struct page **pages, in verify_dma_pinned() argument
    55  page = pages[i]; in verify_dma_pinned()
    57  "pages[%lu] is NOT dma-pinned\n", i)) { in verify_dma_pinned()
    73  struct page **pages; in __gup_benchmark_ioctl() local
    82  pages = kvcalloc(nr_pages, sizeof(void *), GFP_KERNEL); in __gup_benchmark_ioctl()
    83  if (!pages) in __gup_benchmark_ioctl()
    110  pages + i); in __gup_benchmark_ioctl()
    [all …]
|
/Linux-v5.10/Documentation/admin-guide/mm/ |
D | hugetlbpage.rst |
    4  HugeTLB Pages
    30  persistent hugetlb pages in the kernel's huge page pool. It also displays
    32  and surplus huge pages in the pool of huge pages of default size.
    48  is the size of the pool of huge pages.
    50  is the number of huge pages in the pool that are not yet
    53  is short for "reserved," and is the number of huge pages for
    55  but no allocation has yet been made. Reserved huge pages
    57  huge page from the pool of huge pages at fault time.
    59  is short for "surplus," and is the number of huge pages in
    61  maximum number of surplus huge pages is controlled by
    [all …]
|
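The counters excerpted above (total, free, reserved and surplus huge pages) describe the default-size huge page pool. A hedged userspace sketch that takes one default-sized huge page from that pool with mmap(MAP_HUGETLB); the 2 MB length is an assumption about the machine's default huge page size:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL * 1024 * 1024;	/* assumed default huge page size */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		/* Fails when the pool has no huge page to reserve. */
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}

	/* The reservation becomes an allocation at first touch (fault time). */
	memset(p, 0, len);

	munmap(p, len);
	return 0;
}

While the mapping exists but is untouched it is accounted under HugePages_Rsvd; after the memset the page is actually taken out of HugePages_Free.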
D | ksm.rst |
    20  which have been registered with it, looking for pages of identical
    23  content). The amount of pages that KSM daemon scans in a single pass
    27  KSM only merges anonymous (private) pages, never pagecache (file) pages.
    28  KSM's merged pages were originally locked into kernel memory, but can now
    29  be swapped out just like other user pages (but sharing is broken when they
    47  to cancel that advice and restore unshared pages: whereupon KSM
    57  cannot contain any pages which KSM could actually merge; even if
    82  how many pages to scan before ksmd goes to sleep
    94  specifies if pages from different NUMA nodes can be merged.
    95  When set to 0, ksm merges only pages which physically reside
    [all …]
|
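As the excerpt notes, KSM only scans anonymous areas that an application has registered; madvise(MADV_MERGEABLE) is that registration and MADV_UNMERGEABLE cancels it. A minimal userspace sketch (a hypothetical test program, not from the tree):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64UL * 1024 * 1024;
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Identical content in every page gives ksmd something to merge. */
	memset(buf, 0x5a, len);

	/* Register the range with KSM; MADV_UNMERGEABLE would undo this. */
	if (madvise(buf, len, MADV_MERGEABLE))
		perror("madvise(MADV_MERGEABLE)");

	pause();	/* keep the mapping alive while ksmd scans */
	return 0;
}

With /sys/kernel/mm/ksm/run set to 1, the merging shows up over time in the pages_shared and pages_sharing counters under the same sysfs directory.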
D | concepts.rst |
    43  The physical system memory is divided into page frames, or pages. The
    50  pages. These mappings are described by page tables that allow
    55  addresses of actual pages used by the software. The tables at higher
    56  levels contain physical addresses of the pages belonging to the lower
    66  Huge Pages
    77  Many modern CPU architectures allow mapping of the memory pages
    79  it is possible to map 2M and even 1G pages using entries in the second
    80  and the third level page tables. In Linux such pages are called
    81  `huge`. Usage of huge pages significantly reduces pressure on TLB,
    85  memory with the huge pages. The first one is `HugeTLB filesystem`, or
    [all …]
|
D | idle_page_tracking.rst |
    10  The idle page tracking feature allows to track which memory pages are being
    39  Only accesses to user memory pages are tracked. These are pages mapped to a
    40  process address space, page cache and buffer pages, swap cache pages. For other
    41  page types (e.g. SLAB pages) an attempt to mark a page idle is silently ignored,
    42  and hence such pages are never reported idle.
    44  For huge pages the idle flag is set only on the head page, so one has to read
    45  ``/proc/kpageflags`` in order to correctly count idle huge pages.
    52  That said, in order to estimate the amount of pages that are not used by a
    55  1. Mark all the workload's pages as idle by setting corresponding bits in
    56  ``/sys/kernel/mm/page_idle/bitmap``. The pages can be found by reading
    [all …]
|
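Step 1 of the procedure above sets bits in /sys/kernel/mm/page_idle/bitmap, which is read and written in 8-byte words, each covering 64 consecutive page frames. A hedged sketch of that write (the helper name and PFN range are illustrative):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Mark every page frame in [start_pfn, start_pfn + nr_pfns) idle. */
static int mark_pfns_idle(uint64_t start_pfn, uint64_t nr_pfns)
{
	int fd = open("/sys/kernel/mm/page_idle/bitmap", O_WRONLY);
	uint64_t ones = ~0ULL;
	uint64_t pfn;

	if (fd < 0) {
		perror("open page_idle/bitmap");
		return -1;
	}

	/* One 8-byte word per 64 page frames; offsets are word-aligned. */
	for (pfn = start_pfn & ~63ULL; pfn < start_pfn + nr_pfns; pfn += 64)
		if (pwrite(fd, &ones, sizeof(ones), (pfn / 64) * 8) != sizeof(ones))
			perror("pwrite");

	close(fd);
	return 0;
}

Reading the same words back after the workload has run reports which of those frames stayed idle, i.e. were not accessed in between.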
D | transhuge.rst |
    13  using huge pages for the backing of virtual memory with huge pages
    53  collapses sequences of basic pages into huge pages.
    109  pages unless hugepages are immediately available. Clearly if we spend CPU
    111  use hugepages later instead of regular pages. This isn't always
    125  allocation failure and directly reclaim pages and compact
    132  to reclaim pages and wake kcompactd to compact memory so that
    134  of khugepaged to then install the THP pages later.
    140  pages and wake kcompactd to compact memory so that THP is
    179  You can also control how many pages khugepaged should scan at each
    194  The khugepaged progress can be seen in the number of pages collapsed::
    [all …]
|
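Alongside the background collapsing by khugepaged described above, an application can hint that a specific range should be THP-backed. A hedged userspace sketch using madvise(MADV_HUGEPAGE); the region size is illustrative:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 32UL * 1024 * 1024;	/* illustrative 32 MB region */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Mark the range as a THP candidate; khugepaged may also collapse
	 * its small pages into huge pages later, per the tunables above. */
	if (madvise(p, len, MADV_HUGEPAGE))
		perror("madvise(MADV_HUGEPAGE)");

	memset(p, 0, len);	/* faults may be served with huge pages */
	munmap(p, len);
	return 0;
}

This is the case the "madvise" settings of the enabled/defrag knobs are aimed at: allocation and compaction effort is spent only on ranges that asked for huge pages.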
/Linux-v5.10/net/ceph/ |
D | pagevec.c |
    13  void ceph_put_page_vector(struct page **pages, int num_pages, bool dirty) in ceph_put_page_vector() argument
    19  set_page_dirty_lock(pages[i]); in ceph_put_page_vector()
    20  put_page(pages[i]); in ceph_put_page_vector()
    22  kvfree(pages); in ceph_put_page_vector()
    26  void ceph_release_page_vector(struct page **pages, int num_pages) in ceph_release_page_vector() argument
    31  __free_pages(pages[i], 0); in ceph_release_page_vector()
    32  kfree(pages); in ceph_release_page_vector()
    37  * allocate a vector new pages
    41  struct page **pages; in ceph_alloc_page_vector() local
    44  pages = kmalloc_array(num_pages, sizeof(*pages), flags); in ceph_alloc_page_vector()
    [all …]
|
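A hedged sketch of how the helpers above pair up for a caller inside the ceph code: ceph_alloc_page_vector() is assumed to return an ERR_PTR on failure, and ceph_release_page_vector() frees both the pages and the vector. The calling function and the header choice are illustrative:

#include <linux/ceph/libceph.h>	/* assumed location of the declarations */
#include <linux/err.h>
#include <linux/gfp.h>

/* Hypothetical caller: build a page vector for an I/O, then drop it. */
static int demo_page_vector(int num_pages)
{
	struct page **pages;

	pages = ceph_alloc_page_vector(num_pages, GFP_KERNEL);
	if (IS_ERR(pages))
		return PTR_ERR(pages);

	/* ... attach @pages to an OSD request or message here ... */

	ceph_release_page_vector(pages, num_pages);
	return 0;
}

ceph_put_page_vector() is the variant for pages that were obtained by reference (e.g. pinned from userspace) rather than allocated here, optionally dirtying them before the references are dropped.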
/Linux-v5.10/Documentation/vm/ |
D | unevictable-lru.rst |
    15  pages.
    30  pages and to hide these pages from vmscan. This mechanism is based on a patch
    36  main memory will have over 32 million 4k pages in a single zone. When a large
    37  fraction of these pages are not evictable for any reason [see below], vmscan
    39  of pages that are evictable. This can result in a situation where all CPUs are
    43  The unevictable list addresses the following classes of unevictable pages:
    51  The infrastructure may also be able to handle other conditions that make pages
    66  The Unevictable LRU infrastructure maintains unevictable pages on an additional
    69  (1) We get to "treat unevictable pages just like we treat other pages in the
    74  (2) We want to be able to migrate unevictable pages between nodes for memory
    [all …]
|
D | zswap.rst |
    10  Zswap is a lightweight compressed cache for swap pages. It takes pages that are
    34  Zswap evicts pages from compressed cache on an LRU basis to the backing swap
    48  When zswap is disabled at runtime it will stop storing pages that are
    50  back into memory all of the pages stored in the compressed pool. The
    51  pages stored in zswap will remain in the compressed pool until they are
    53  pages out of the compressed pool, a swapoff on the swap device(s) will
    54  fault back into memory all swapped out pages, including those in the
    60  Zswap receives pages for compression through the Frontswap API and is able to
    61  evict pages from its own compressed pool on an LRU basis and write them back to
    68  pages are freed. The pool is not preallocated. By default, a zpool
    [all …]
|
D | page_migration.rst |
    7  Page migration allows moving the physical location of pages between
    10  system rearranges the physical location of those pages.
    13  for migrating pages to or from device private memory.
    16  by moving pages near to the processor where the process accessing that memory
    20  pages are located through the MF_MOVE and MF_MOVE_ALL options while setting
    21  a new memory policy via mbind(). The pages of a process can also be relocated
    23  migrate_pages() function call takes two sets of nodes and moves pages of a
    30  pages of a process are located. See also the numa_maps documentation in the
    35  administrator may detect the situation and move the pages of the process
    38  through user space processes that move pages. A special function call
    [all …]
|
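The migrate_pages() call mentioned above takes a set of source nodes and a set of destination nodes. A hedged userspace sketch using the wrapper declared in <numaif.h> (link with -lnuma); the node numbers are illustrative and assume a machine with at least two NUMA nodes:

#include <numaif.h>	/* migrate_pages(); link with -lnuma */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Single-word node masks: move this process's pages from node 0 to node 1. */
	unsigned long old_nodes = 1UL << 0;
	unsigned long new_nodes = 1UL << 1;
	unsigned long maxnode = 8 * sizeof(unsigned long);	/* bits in the masks */
	long ret;

	ret = migrate_pages(getpid(), maxnode, &old_nodes, &new_nodes);
	if (ret < 0)
		perror("migrate_pages");
	else
		printf("pages that could not be moved: %ld\n", ret);

	return 0;
}

mbind() with MF_MOVE covers the other case in the text: binding a specific address range to a set of nodes and migrating its existing pages at the same time.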
/Linux-v5.10/drivers/gpu/drm/ttm/ |
D | ttm_page_alloc.c |
    29  * - Pool collects resently freed pages for reuse
    31  * - doesn't track currently in use pages
    59  * struct ttm_page_pool - Pool to reuse recently allocated uc/wc pages.
    65  * @list: Pool of free uc/wc pages for fast reuse.
    67  * @npages: Number of pages in pool.
    100  * @free_interval: minimum number of jiffies between freeing pages from pool.
    103  * some pages to free.
    104  * @small_allocation: Limit in number of pages what is small allocation.
    164  /* Convert kb to number of pages */ in ttm_pool_store()
    246  /* set memory back to wb and free the pages. */
    [all …]
|
/Linux-v5.10/include/xen/ |
D | xen-ops.h |
    67  unsigned int domid, bool no_translate, struct page **pages);
    72  bool no_translate, struct page **pages) in xen_remap_pfn() argument
    87  struct page **pages);
    89  int nr, struct page **pages);
    100  struct page **pages) in xen_xlate_remap_gfn_array() argument
    106  int nr, struct page **pages) in xen_xlate_unmap_gfn_range() argument
    117  * @vma: VMA to map the pages into
    118  * @addr: Address at which to map the pages
    123  * @domid: Domain owning the pages
    124  * @pages: Array of pages if this domain has an auto-translated physmap
    [all …]
|
/Linux-v5.10/fs/isofs/ |
D | compress.c |
    37  * to one zisofs block. Store the data in the @pages array with @pcount
    42  struct page **pages, unsigned poffset, in zisofs_uncompress_block() argument
    68  if (!pages[i]) in zisofs_uncompress_block()
    70  memset(page_address(pages[i]), 0, PAGE_SIZE); in zisofs_uncompress_block()
    71  flush_dcache_page(pages[i]); in zisofs_uncompress_block()
    72  SetPageUptodate(pages[i]); in zisofs_uncompress_block()
    122  if (pages[curpage]) { in zisofs_uncompress_block()
    123  stream.next_out = page_address(pages[curpage]) in zisofs_uncompress_block()
    175  if (pages[curpage]) { in zisofs_uncompress_block()
    176  flush_dcache_page(pages[curpage]); in zisofs_uncompress_block()
    [all …]
|
/Linux-v5.10/drivers/gpu/drm/i915/gem/selftests/ |
D | huge_gem_object.c |
    12  struct sg_table *pages) in huge_free_pages() argument
    18  for_each_sgt_page(page, sgt_iter, pages) { in huge_free_pages()
    24  sg_free_table(pages); in huge_free_pages()
    25  kfree(pages); in huge_free_pages()
    34  struct sg_table *pages; in huge_get_pages() local
    37  pages = kmalloc(sizeof(*pages), GFP); in huge_get_pages()
    38  if (!pages) in huge_get_pages()
    41  if (sg_alloc_table(pages, npages, GFP)) { in huge_get_pages()
    42  kfree(pages); in huge_get_pages()
    46  sg = pages->sgl; in huge_get_pages()
    [all …]
|
/Linux-v5.10/drivers/gpu/drm/vkms/ |
D | vkms_gem.c |
    37  WARN_ON(gem->pages); in vkms_gem_free_object()
    61  if (obj->pages) { in vkms_gem_fault()
    62  get_page(obj->pages[page_offset]); in vkms_gem_fault()
    63  vmf->page = obj->pages[page_offset]; in vkms_gem_fault()
    155  if (!vkms_obj->pages) { in _get_pages()
    156  struct page **pages = drm_gem_get_pages(gem_obj); in _get_pages() local
    158  if (IS_ERR(pages)) in _get_pages()
    159  return pages; in _get_pages()
    161  if (cmpxchg(&vkms_obj->pages, NULL, pages)) in _get_pages()
    162  drm_gem_put_pages(gem_obj, pages, false, true); in _get_pages()
    [all …]
|
/Linux-v5.10/drivers/gpu/drm/xen/ |
D | xen_drm_front_gem.c |
    30  struct page **pages; member
    49  xen_obj->pages = kvmalloc_array(xen_obj->num_pages, in gem_alloc_pages_array()
    51  return !xen_obj->pages ? -ENOMEM : 0; in gem_alloc_pages_array()
    56  kvfree(xen_obj->pages); in gem_free_pages_array()
    57  xen_obj->pages = NULL; in gem_free_pages_array()
    93  * only allocate array of pointers to pages in gem_create()
    100  * allocate ballooned pages which will be used to map in gem_create()
    104  xen_obj->pages); in gem_create()
    106  DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n", in gem_create()
    116  * need to allocate backing pages now, so we can share those in gem_create()
    [all …]
|
/Linux-v5.10/drivers/xen/ |
D | xlate_mmu.c |
    47  /* Break down the pages in 4KB chunk and call fn for each gfn */
    48  static void xen_for_each_gfn(struct page **pages, unsigned nr_gfn, in xen_for_each_gfn() argument
    57  page = pages[i / XEN_PFN_PER_PAGE]; in xen_for_each_gfn()
    71  struct page **pages; member
    99  struct page *page = info->pages[info->index++]; in remap_pte_fn()
    148  struct page **pages) in xen_xlate_remap_gfn_array() argument
    163  data.pages = pages; in xen_xlate_remap_gfn_array()
    184  int nr, struct page **pages) in xen_xlate_unmap_gfn_range() argument
    186  xen_for_each_gfn(pages, nr, unmap_gfn, NULL); in xen_xlate_unmap_gfn_range()
    205  * xen_xlate_map_ballooned_pages - map a new set of ballooned pages
    [all …]
|
/Linux-v5.10/drivers/gpu/drm/amd/amdgpu/ |
D | amdgpu_gart.c |
    41  * in the GPU's address space. System pages can be mapped into
    42  * the aperture and look like contiguous pages from the GPU's
    43  * perspective. A page table maps the pages in the aperture
    44  * to the actual backing pages in system memory.
    69  * when pages are taken out of the GART
    211  * amdgpu_gart_unbind - unbind pages from the gart page table
    215  * @pages: number of pages to unbind
    217  * Unbinds the requested pages from the gart page table and
    222  int pages) in amdgpu_gart_unbind() argument
    238  for (i = 0; i < pages; i++, p++) { in amdgpu_gart_unbind()
    [all …]
|
/Linux-v5.10/fs/ramfs/ |
D | file-nommu.c |
    58  * add a contiguous set of pages into a ramfs inode when it's truncated from
    65  struct page *pages; in ramfs_nommu_expand_for_mapping() local
    82  /* allocate enough contiguous pages to be able to satisfy the in ramfs_nommu_expand_for_mapping()
    84  pages = alloc_pages(gfp, order); in ramfs_nommu_expand_for_mapping()
    85  if (!pages) in ramfs_nommu_expand_for_mapping()
    88  /* split the high-order page into an array of single pages */ in ramfs_nommu_expand_for_mapping()
    92  split_page(pages, order); in ramfs_nommu_expand_for_mapping()
    94  /* trim off any pages we don't actually require */ in ramfs_nommu_expand_for_mapping()
    96  __free_page(pages + loop); in ramfs_nommu_expand_for_mapping()
    100  data = page_address(pages); in ramfs_nommu_expand_for_mapping()
    [all …]
|
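The excerpt above shows a common nommu pattern: over-allocate a power-of-two block, split_page() it into independent order-0 pages, and free the tail pages that are not needed. A hedged stand-alone sketch of the same pattern (the helper name is made up):

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical: return @npages physically contiguous order-0 pages. */
static struct page *demo_alloc_contig(unsigned int npages)
{
	unsigned int order = get_order((size_t)npages << PAGE_SHIFT);
	struct page *pages = alloc_pages(GFP_KERNEL, order);
	unsigned int i;

	if (!pages)
		return NULL;

	/* Turn the order-N block into 2^N independently refcounted pages... */
	split_page(pages, order);

	/* ...then trim off the pages we don't actually require. */
	for (i = npages; i < (1U << order); i++)
		__free_page(pages + i);

	return pages;
}

The caller later frees the remaining pages one by one with __free_page(), which is only legal because the split turned them into ordinary standalone pages.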
/Linux-v5.10/drivers/gpu/drm/i915/gem/ |
D | i915_gem_pages.c |
    14  struct sg_table *pages, in __i915_gem_object_set_pages() argument
    26  /* Make the pages coherent with the GPU (flushing any swapin). */ in __i915_gem_object_set_pages()
    30  drm_clflush_sg(pages); in __i915_gem_object_set_pages()
    34  obj->mm.get_page.sg_pos = pages->sgl; in __i915_gem_object_set_pages()
    37  obj->mm.pages = pages; in __i915_gem_object_set_pages()
    54  * 64K or 4K pages, although in practice this will depend on a number of in __i915_gem_object_set_pages()
    101  /* Ensure that the associated pages are gathered from the backing storage
    104  * i915_gem_object_unpin_pages() - once the pages are no longer referenced
    105  * either as a result of memory pressure (reaping pages under the shrinker)
    140  /* Try to discard unwanted pages */
    [all …]
|
/Linux-v5.10/kernel/dma/ |
D | remap.c |
    15  return area->pages; in dma_common_find_pages()
    19  * Remaps an array of PAGE_SIZE pages into another vm_area.
    22  void *dma_common_pages_remap(struct page **pages, size_t size, in dma_common_pages_remap() argument
    27  vaddr = vmap(pages, PAGE_ALIGN(size) >> PAGE_SHIFT, in dma_common_pages_remap()
    30  find_vm_area(vaddr)->pages = pages; in dma_common_pages_remap()
    42  struct page **pages; in dma_common_contiguous_remap() local
    46  pages = kmalloc_array(count, sizeof(struct page *), GFP_KERNEL); in dma_common_contiguous_remap()
    47  if (!pages) in dma_common_contiguous_remap()
    50  pages[i] = nth_page(page, i); in dma_common_contiguous_remap()
    51  vaddr = vmap(pages, count, VM_DMA_COHERENT, prot); in dma_common_contiguous_remap()
    [all …]
|
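dma_common_pages_remap() above is essentially a thin wrapper around vmap(): hand it an array of struct page pointers and get back one virtually contiguous kernel mapping. A hedged sketch of the underlying call with ordinary flags (VM_MAP and PAGE_KERNEL here are illustrative; the DMA code uses VM_DMA_COHERENT and a caller-supplied pgprot):

#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Hypothetical: map an already-allocated, possibly scattered page array. */
static void *demo_map_page_array(struct page **pages, unsigned int count)
{
	/* One PTE per page; the pages themselves stay owned by the caller. */
	return vmap(pages, count, VM_MAP, PAGE_KERNEL);
}

vunmap() on the returned address tears the mapping down again without freeing the pages; whoever allocated the pages remains responsible for releasing them.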
/Linux-v5.10/Documentation/core-api/ |
D | pin_user_pages.rst |
    35  In other words, use pin_user_pages*() for DMA-pinned pages, and
    40  multiple threads and call sites are free to pin the same struct pages, via both
    55  pages* array, and the function then pins pages by incrementing each by a special
    58  For huge pages (and in fact, any compound page of more than 2 pages), the
    63  This approach for compound pages avoids the counting upper limit problems that
    65  huge pages, because each tail page adds a refcount to the head page. And in
    69  This also means that huge pages and compound pages (of order > 1) do not suffer
    80  but the caller passed in a non-null struct pages* array, then the function
    81  sets FOLL_GET for you, and proceeds to pin pages by incrementing the refcount
    90  Tracking dma-pinned pages
    [all …]
|
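The "Tracking dma-pinned pages" section that the excerpt breaks off at describes counters exported through /proc/vmstat for pin and unpin activity. A hedged userspace sketch that simply prints them; the nr_foll_pin_* names follow that document and may be absent on kernels without the accounting compiled in:

#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[256];

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}

	/* Lifetime totals of FOLL_PIN acquisitions and releases, if present. */
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "nr_foll_pin_", 12))
			fputs(line, stdout);

	fclose(f);
	return 0;
}

On a healthy system the two counters track each other; a steadily growing gap would suggest pins that are never released.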