Lines Matching +full:mapped +full:- +full:addr

 * SPDX-License-Identifier: Apache-2.0
/** Write-through caching. Used by certain drivers. */
/** Full write-back caching. Any mapped RAM wants this. */
#define K_MEM_CACHE_MASK (BIT(3) - 1)
 * Default is read-only, no user, no exec.
/** Region will have read/write access (and not be read-only) */
/** Region will be accessible to user mode (normally supervisor-only) */
/** Region will be mapped 1:1 (virtual address equals physical address) */
 * @brief The mapped region is not guaranteed to be zeroed.
 * Such memory is guaranteed to never produce a page fault due to page-outs
 * or copy-on-write once the mapping call has returned. Physical page frames
 * will be pre-fetched as necessary and pinned.
 * Region will be unpaged, i.e. not mapped into memory.
 * concurrent memory mappings or page-ins take place.
 * The mapped region is not guaranteed to be physically contiguous in memory.
 * Pages mapped in this way have write-back cache settings.
 * The returned virtual memory pointer will be page-aligned. The size
 * parameter, and any base address for re-mapping purposes, must be
 * page-aligned.
 * @param size Size of the memory mapping. This must be page-aligned.
 * @return The mapped memory location, or NULL if insufficient virtual address
 * space is available.
 * This maps backing-store "location" tokens into Zephyr's address space.
 * addresses in the mapped range are accessed.
 * The provided backing-store "location" token must be linearly incrementable
 * Allocated pages will have write-back cache settings.
 * The returned virtual memory pointer will be page-aligned. The size
 * parameter, and any base address for re-mapping purposes, must be
 * page-aligned.
 * @param size Size of the memory mapping. This must be page-aligned.
 * Un-map mapped memory
 * This removes a memory mapping for the provided page-aligned region.
 * Associated page frames will be freed, and the kernel may re-use the
 * associated virtual address region.
 * Calling this function on a region which was not mapped to begin with is
 * undefined behavior.
 * @param addr Page-aligned memory region base virtual address
 * @param size Page-aligned memory region size
static inline void k_mem_unmap(void *addr, size_t size)
{
	k_mem_unmap_phys_guard(addr, size, true);
}
 * page-aligned memory region.
 * Calling this function on a region which was not mapped to begin with is
 * undefined behavior. However, system memory implicitly mapped at boot time
 * @param addr Page-aligned memory region base virtual address
 * @param size Page-aligned memory region size
int k_mem_update_flags(void *addr, size_t size, uint32_t flags);
 * @param[in] addr Region base address
 * @retval offset between aligned_addr and addr
size_t k_mem_region_align(uintptr_t *aligned_addr, size_t *aligned_size,
			  uintptr_t addr, size_t size, size_t align);