Lines Matching +full:permission +full:- +full:flags
4 * SPDX-License-Identifier: Apache-2.0
36 /** Write-through caching. Used by certain drivers. */
39 /** Full write-back caching. Generally wanted for any mapped RAM. */
43 * ARM64-specific flags are defined in arch/arm64/arm_mem.h;
44 * take care to avoid conflicts with them when updating these flags.
47 /** Reserved bits for cache modes in k_map() flags argument */
48 #define K_MEM_CACHE_MASK (BIT(3) - 1)
53 * @name Region permission attributes.
55 * Default is read-only, no user, no exec
60 /** Region will have read/write access (and not read-only) */
66 /** Region will be accessible to user mode (normally supervisor-only) */
92 * @name k_mem_map() control flags
111 * Such memory is guaranteed to never produce a page fault due to page-outs
112 * or copy-on-write once the mapping call has returned. Physical page frames
113 * will be pre-fetched as necessary and pinned.
145 * concurrent memory mappings or page-ins take place.
157 * provided flags argument.
160 * K_MEM_PERM_USER flags here; instead manage the region's permissions
171 * Pages mapped in this way have write-back cache settings.
173 * The returned virtual memory pointer will be page-aligned. The size
174 * parameter, and any base address for re-mapping purposes must be page-
181 * Many K_MEM_MAP_* flags have been implemented to alter the behavior of this
182 * function, with details in the documentation for these flags.
184 * @param size Size of the memory mapping. This must be page-aligned.
185 * @param flags K_MEM_PERM_*, K_MEM_MAP_* control flags.
190 static inline void *k_mem_map(size_t size, uint32_t flags) in k_mem_map() argument
192 return k_mem_map_phys_guard((uintptr_t)NULL, size, flags, true); in k_mem_map()
199 * This maps backing-store "location" tokens into Zephyr's address space.
206 * provided flags argument.
209 * K_MEM_PERM_USER flags here; instead manage the region's permissions
216 * The provided backing-store "location" token must be linearly incrementable
219 * Allocated pages will have write-back cache settings.
221 * The returned virtual memory pointer will be page-aligned. The size
222 * parameter, and any base address for re-mapping purposes must be page-
230 * @param size Size of the memory mapping. This must be page-aligned.
231 * @param flags K_MEM_PERM_*, K_MEM_MAP_* control flags.
236 static inline void *k_mem_map_unpaged(uintptr_t location, size_t size, uint32_t flags) in k_mem_map_unpaged() argument
238 flags |= K_MEM_MAP_UNPAGED; in k_mem_map_unpaged()
239 return k_mem_map_phys_guard(location, size, flags, false); in k_mem_map_unpaged()
244 * Un-map mapped memory
246 * This removes a memory mapping for the provided page-aligned region.
247 * Associated page frames will be freed and the kernel may re-use the associated
253 * @param addr Page-aligned memory region base virtual address
254 * @param size Page-aligned memory region size
262 * Modify memory mapping attribute flags
264 * This updates caching, access and control flags for the provided
265 * page-aligned memory region.
271 * @param addr Page-aligned memory region base virtual address
272 * @param size Page-aligned memory region size
273 * @param flags K_MEM_PERM_*, K_MEM_MAP_* control flags.
276 int k_mem_update_flags(void *addr, size_t size, uint32_t flags);