Lines Matching +full:reserved +full:- +full:memory

 * SPDX-License-Identifier: Apache-2.0

 * @defgroup kernel_mm_internal_apis Kernel Memory Management Internal APIs

#define K_MEM_VIRT_OFFSET ((CONFIG_KERNEL_VM_BASE + CONFIG_KERNEL_VM_OFFSET) - \
                           (CONFIG_SRAM_BASE_ADDRESS + CONFIG_SRAM_OFFSET))

#define K_MEM_PHYS_ADDR(virt) ((virt) - K_MEM_VIRT_OFFSET)

 * @brief Kernel is mapped in virtual memory if defined.

#error "XIP and a virtual memory kernel are not allowed"

/* Should be identity-mapped (in k_mem_phys_addr()) */
 * Map a physical memory region into the kernel's virtual address space
 *
 * This function is intended for mapping memory-mapped I/O regions into
 * the virtual address space.
 *
 * The memory mapped via this function must be unmapped using
 * k_mem_unmap_phys_bare().
 *
 * This function alters the active page tables in the area reserved
 * for the kernel.
 *
 * Unused bits in 'flags' are reserved for future expansion.
 * A caching mode must be selected. By default, the region is read-only
 * with user access and code execution forbidden.
 *
 * It is highly discouraged to use this function to map system RAM page
 * frames. It may conflict with anonymous memory mappings and demand paging
 * and produce undefined behavior. Do not use this unless you know
 * exactly what you are doing. If you need a chunk of memory, use k_mem_map().
 * If you need a contiguous buffer of physical memory, statically declare it
 * and pin it at build time; it will be mapped when the system boots.
 *
 * @param[in] phys Physical address base of the memory region
 * @param[in] size Size of the memory region
 * Unmap a virtual memory region from the kernel's virtual address space.
 *
 * This function is intended for kernel or driver code
 * where temporary memory mappings need to be made. This allows these
 * memory mappings to be discarded once they are no longer needed.
 *
 * This function alters the active page tables in the area reserved
 * for the kernel.
 *
 * It is highly discouraged to use this function to unmap memory mappings.
 * It may conflict with anonymous memory mappings and demand paging and
 * produce undefined behavior.
 * Map memory into virtual address space with guard pages.
 *
 * This maps memory into virtual address space with a preceding and
 * a succeeding guard page. The memory mapped via this function must be
 * unmapped using k_mem_unmap_phys_guard().
 *
 * This function maps a contiguous physical memory region into the kernel's
 * virtual address space.
 *
 * This function alters the active page tables in the area reserved
 * for the kernel.
 *
 * If user access control is needed, manage the region's permissions
 * with memory domain APIs after the mapping has been established. Setting
 * K_MEM_PERM_USER here will allow all user threads to access this memory,
 * which is usually undesirable.
 *
 * Unless K_MEM_MAP_UNINIT is used, the returned memory will be zeroed.
 *
 * The returned virtual memory pointer will be page-aligned. The size
 * parameter, and any base address for re-mapping purposes, must be
 * page-aligned.
 *
 * @param phys Physical address base of the memory region if not requesting
 *             anonymous memory. Must be page-aligned.
 * @param size Size of the memory mapping. This must be page-aligned.

 * @param is_anon True if requesting a mapping with anonymous memory.

 * @return The mapped memory location, or NULL if insufficient virtual address
 *         space, insufficient physical memory to establish the mapping,
 *         or insufficient memory for paging structures.
 * Unmap memory mapped via k_mem_map_phys_guard().
 *
 * This removes the memory mappings for the provided page-aligned region,
 * along with the preceding and succeeding guard pages.
 *
 * This function alters the active page tables in the area reserved
 * for the kernel.
 *
 * @param addr Page-aligned memory region base virtual address
 * @param size Page-aligned memory region size
 * @param is_anon True if the mapped memory is from anonymous memory.